Back in 2018, Google announced that page speed would affect rankings in both desktop and mobile search results. But ranking isn’t the only thing at stake: studies show that a three-second page load time produces a 13% bounce rate, which means page speed is as much about user experience as it is about search results.
So what should you do if your website is slow? Improving website performance is no walk in the park, which is why many companies settle for minor optimizations just to hold their positions in search results.
Read on to learn how we went from a performance score of 30 to 99, and why (besides being enormously proud of the achievement) having one of the fastest websites in the industry is worth the effort.
The Starting Point: The Challenge and Why We Had to Solve It
In redesigning our website, we set out to achieve three main objectives:
- Increasing conversion rates through fast page loads, better UX, and high-quality content.
- Improving SEO and gaining better visibility in search engines.
- Strengthening our brand image by showcasing new projects and customer reviews.
These objectives guided the features we needed to build. Here are the key capabilities we defined to reach them:
- Newly structured, good-looking web pages to showcase our services, industries, technologies, and case studies.
- Improved blog post pages with search, tags, and categories.
- A ‘Get in touch’ form and other forms for easy communication.
- Various third-party integrations: Google Tag Manager, Cookiebot, and HubSpot, among others.
Overall, our aim was to build a secure static website, so we chose GatsbyJS, a React framework that fits this goal perfectly.
Why Gatsby? Its core strength is turning web development into a smooth, fast process by assembling pre-built HTML, CSS, and JavaScript at build time. Because GatsbyJS is built on React, it let us create our own templates and components while drawing on packages from the React ecosystem.
Another decision the revamped website demanded was the API backend. We chose Strapi, an open-source content management system. It let us tailor our data structures and extend the API to manage diverse content formats such as text, images, and video, and it saved us substantial time and resources thanks to its user-friendly admin panel and built-in features, which spared us from developing everything from scratch.
Finally, for server requests we used GraphQL. It integrates seamlessly with GatsbyJS and let us define the shape of each query, requesting only the fields and relationships we actually needed. Avoiding superfluous data meant faster page loads.
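For illustration, here is a sketch of what such a query can look like on a Gatsby site backed by Strapi; the type and field names are hypothetical, not our actual schema. A blog listing page asks only for the fields its cards render:

```graphql
# Only the card fields: no article bodies, no raw image data
query BlogCards {
  allStrapiArticle {
    nodes {
      title
      slug
      excerpt
    }
  }
}
```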
This combination of technologies allowed us to achieve optimal website stability and security
Once completed, the new website delivered: higher rankings and a refreshed design. At first, anyway. Then things slowly began to deteriorate. After we added new pages, made a raft of changes, and integrated third-party software, the codebase grew to the point where our scores in Google’s Lighthouse, a tool that analyzes webpage speed, dropped dramatically.
The Lighthouse scores our website had before we began the improvement effort
With scores that low, Google wouldn’t rank us well. And even if it did, users wouldn’t endure the wait for pages to load.
Research from Ericsson found that the stress caused by page load delays on mobile devices is comparable to watching a horror movie. Every second counts, so with that in mind, we dove into improving our site’s performance right away.
Improving Website Performance Step by Step
To achieve a meaningful performance boost, we examined the main issues plaguing the website and set out to fix each of them. Here’s how we overcame the biggest ones.
1. Making Network Connection Setup Twice as Fast
GTmetrix, a tool that charts how a website loads, helped us pinpoint the resources from external domains responsible for long connection times.
The purple segment shows how long a DNS server takes to receive the request for a domain name’s IP address, while light green shows server connection time. Both intervals were long, so we set out to reduce connection time.
To speed up the website, we made several technical improvements. First, we deployed Resource Hints via HTML <link> elements. These hints let the browser establish connections proactively, before resource requests are sent to the server. For resources from external domains, we added dns-prefetch and preconnect hints, which instruct the browser in advance to open connections to the specified domains. When the browser later needs resources from those domains, they load quickly because the groundwork is already done.
For internal files such as scripts and stylesheets, we used preload and prefetch hints. They ensure that as users navigate our pages, essential resources are served from the local cache rather than fetched again from a remote server, speeding things up. A sketch of such hints is shown below.
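On a Gatsby site, one common way to inject these hints is react-helmet; here is a minimal sketch assuming that approach. The domains and file paths are placeholders, not our exact configuration:

```jsx
import React from 'react';
import { Helmet } from 'react-helmet';

const ResourceHints = () => (
  <Helmet>
    {/* External domains: resolve DNS and open the connection early */}
    <link rel="preconnect" href="https://www.googletagmanager.com" />
    <link rel="dns-prefetch" href="https://www.googletagmanager.com" />
    {/* Internal assets: fetch critical files before they are requested */}
    <link
      rel="preload"
      href="/fonts/manrope.woff2"
      as="font"
      type="font/woff2"
      crossOrigin="anonymous"
    />
  </Helmet>
);

export default ResourceHints;
```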
These optimizations shrank *Largest Contentful Paint from 2.9 seconds to 0.8 seconds and *First Contentful Paint from 1 second to 0.6 seconds by letting the browser start loading resources sooner.
*LCP is a Core Web Vitals metric that measures the time from when the user starts loading the page until the largest image or text block is rendered in the viewport.
*FCP measures the interval between the user’s first access to the page and the moment content renders on the screen.
2. Speeding Up Font Loading for Fast Text Rendering
Our website uses a mix of custom and publicly available fonts. We self-host the public Google font Manrope via the Fontsource collection, the fastest way to serve it.
Nevertheless, the font package loaded with a delay even though the text itself arrived with the HTML. To bridge this gap between text and font loading, our initial tactic was to render the text invisibly in a fallback font until our preferred web font fully loaded. It worked, but it lengthened the LCP metric, so we adjusted our strategy in favor of fast text rendering.
Here’s the workflow: the browser first picks an available font from the specified font family and uses it to render the text. Then, once the original font loads, the browser swaps the fallback for it. This is the fastest font-loading method, but it carries a risk of unexpected text movement because the two fonts have different metrics. To minimize layout shifts and improve Cumulative Layout Shift (CLS, a metric that tracks unexpected element movement during page load), we turned to CSS Fonts Module Level 5, a set of specifications with properties such as size-adjust, descent-override, and line-gap-override that helped us prevent content jumps.
CSS Fonts Module Level 5 performed flawlessly, letting fonts swap without affecting the layout
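Here is a hedged sketch of the technique in CSS. The override percentages are illustrative and would be tuned to Manrope’s metrics, not copied from our stylesheet:

```css
/* The real web font: render fallback text immediately, swap when ready */
@font-face {
  font-family: 'Manrope';
  src: url('/fonts/manrope.woff2') format('woff2');
  font-display: swap;
}

/* A fallback face metrically adjusted to mimic Manrope */
@font-face {
  font-family: 'Manrope-fallback';
  src: local('Arial');
  size-adjust: 106%;     /* scale glyphs to match the web font's width */
  descent-override: 24%; /* align the baseline box with the web font */
  line-gap-override: 0%; /* keep line heights identical across the swap */
}

body {
  font-family: 'Manrope', 'Manrope-fallback', sans-serif;
}
```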
As a result, three metrics improved: First Contentful Paint, Largest Contentful Paint, and Cumulative Layout Shift. In particular, CLS dropped from 0.143 to 0 (CLS is a unitless score).
3. Optimizing Image and Video Loading
Another performance challenge was heavy network payloads. The website audit revealed that we needed to streamline image and video loading by reducing file sizes without compromising quality.
Yellow Lab Tools showed that, of all file types, videos and images weighed on our website the most
Our website contains two kinds of images: vector images (in Scalable Vector Graphics, or SVG, format) and raster images (in PNG and JPG formats). Each category called for its own optimization techniques.
3.1 Optimizing vector images
Vector images are generally lightweight; the problem was that each image required a separate HTTP request to load, which dragged down performance. To tackle this, we adopted inline SVG, embedding the images directly into the HTML. The ‘gatsby-plugin-react-svg’ plugin for Gatsby streamlines this; a configuration sketch follows.
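A minimal sketch of the plugin setup; the /assets/ path rule is illustrative, not our exact config:

```js
// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-plugin-react-svg',
      options: {
        rule: {
          include: /assets/, // SVGs imported from this path become React components
        },
      },
    },
  ],
};
```

With this in place, import Logo from '../assets/logo.svg' yields a React component that renders the SVG inline, with no extra network request.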
3.2 Optimizing raster images
To speed up PNG and JPG loading, we converted the images to the modern WebP format, which guarantees much better compression with minimal quality loss. Similarly, we converted videos from MP4 to WebM for better compression at comparable quality. WebP and WebM became our primary formats, but we expected compatibility issues with older browsers, so we kept PNG, JPG, and MP4 fallbacks for browsers that don’t support the newer formats. The fallback pattern looks roughly like this:
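A sketch of the pattern; the file paths are placeholders. The browser walks the sources in order and takes the first format it supports:

```jsx
import React from 'react';

const Media = () => (
  <>
    {/* Images: WebP first, JPG for browsers without WebP support */}
    <picture>
      <source srcSet="/images/office.webp" type="image/webp" />
      <img src="/images/office.jpg" alt="Our office" />
    </picture>

    {/* Video: WebM first, MP4 as the fallback */}
    <video autoPlay muted loop playsInline>
      <source src="/media/intro.webm" type="video/webm" />
      <source src="/media/intro.mp4" type="video/mp4" />
    </video>
  </>
);

export default Media;
```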
We also had to optimize image display across devices. Phones, tablets, laptops, and 4K monitors all call for different image dimensions, and shipping the same image to a phone and a 4K monitor inflates load time. Our remedy was adaptive graphics: embedding several image variants in the code so the browser can pick the best one based on window size, screen resolution, network speed, and other factors. Using the ‘gatsby-background-image’ and ‘gatsby-image’ packages, we generated multiple image variants tailored to different devices; a sketch follows the screenshots below. The Network tab snapshots show the “Toggle device toolbar” we used to switch between device display modes.
Here, we enabled the device toolbar to fine-tune image loading for different devices
The same file optimized for different devices, at a reduced size
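Here is a sketch of the gatsby-image pattern; the file path and width are illustrative. Gatsby generates several resized variants at build time, and the browser downloads only the one that fits the viewport:

```jsx
import React from 'react';
import { graphql, useStaticQuery } from 'gatsby';
import Img from 'gatsby-image';

const Hero = () => {
  const data = useStaticQuery(graphql`
    query {
      file(relativePath: { eq: "hero.jpg" }) {
        childImageSharp {
          fluid(maxWidth: 1920) {
            # Fragment that emits srcset variants plus WebP sources
            ...GatsbyImageSharpFluid_withWebp
          }
        }
      }
    }
  `);
  return <Img fluid={data.file.childImageSharp.fluid} alt="Hero" />;
};

export default Hero;
```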
3.3 Lazy loading for offscreen images
Lastly, we implemented lazy loading with a blurred-placeholder effect so images load only when the user is about to see them, improving the experience for users on limited data plans or slow internet connections.
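gatsby-image handled this for us out of the box, but for clarity, here is a minimal sketch of the underlying technique using IntersectionObserver; the data-src convention and the is-blurred class are hypothetical:

```js
// Images start with a tiny blurred placeholder and the real URL in data-src
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;          // start downloading the full image
    img.classList.remove('is-blurred'); // fade out the blur effect
    observer.unobserve(img);            // each image only needs this once
  });
});

lazyImages.forEach((img) => observer.observe(img));
```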
This is how lazy loading of visual content works in practice
All the hard graft paid off: we substantially reduced the size of the visuals on the website without compromising quality.
After the improvements, Yellow Lab Tools shows that video (unsurprisingly) still accounts for the largest share of the page’s weight, but the load from images has dropped considerably (purple segment)
These improvements considerably enriched the user experience and metrics like CLS and LCP. And by relying exclusively on image-optimization plugins from the Gatsby ecosystem, we kept the optimization work itself simple.
4. Streamlining Caching
We must admit that caching, a technique in which the web server keeps a copy of the web page, was overlooked when the website was first built. That was a missed opportunity: effective caching speeds up the site and reduces server load. We resolved to catch up on cache optimization, but ran into a few specific obstacles.
Our goal was to speed up resource loading by keeping files stored for future visits instead of downloading them every time. To achieve this, we used the ‘cache-control’ HTTP header to set how long each file should live in the cache. But that raised another problem: when we changed the site’s content or design, the updates didn’t appear immediately because the browser kept using the old cached copy.
How did we solve it? We appended a hash to the file names, one that changes with every file edit. A changed hash means a changed URL, so the browser fetches the new file while everything unchanged stays cached for a long time. As a result, the First Contentful Paint (FCP) metric moved from poor to moderate. The caching policy looks roughly like the sketch below.
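Our site is static and the exact server setup differs, but here is a hedged sketch of the policy using Express: content-hashed assets are cached for a year, while HTML entry points are always revalidated.

```js
const express = require('express');
const app = express();

// Content-hashed bundles (e.g. app.3f2a9c.js) never change, so cache them hard
app.use(
  '/static',
  express.static('public/static', { maxAge: '1y', immutable: true })
);

// HTML must be revalidated so edits show up immediately
app.use(
  express.static('public', {
    setHeaders: (res, filePath) => {
      if (filePath.endsWith('.html')) {
        res.setHeader('Cache-Control', 'public, max-age=0, must-revalidate');
      }
    },
  })
);

app.listen(3000);
```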
At present, we’re considering an additional form of caching: client-side (browser) caching. Unlike server caching, which requires a connection to the server and consumes bandwidth to load the response, client-side caching can let users access the page without a network connection. It has its constraints, though: if the user’s device runs low on storage, the browser may evict older data to make room for new content.
For a clearer picture, here is a comparison of how server-side caching and client-side caching work
5. Eliminating Oversized JavaScript Bundles
Bundles are files, typically JavaScript, CSS, and other assets, combined into a single file for more efficient delivery. As our website grew more complex, our bundles kept growing too, weighing the site down. It was time to pinpoint the problematic chunks and cut them out.
Several useful tools exist for finding and fixing oversized bundles. Bundlephobia reports how much an NPM package adds to your bundle size, helping you avoid excessively heavy dependencies. Import Cost, a VS Code extension, calculates the ‘cost’ of imported packages right in the editor, supporting well-informed decisions. As part of our optimization, we replaced heavy JS libraries; for instance, we swapped the widely used ‘classnames’ package for ‘clsx’, a faster and smaller drop-in replacement that fit our needs:
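A sketch of the swap (the class names are illustrative). clsx accepts the same argument shapes as classnames, so it can replace it without rewriting call sites:

```js
import clsx from 'clsx';

// Strings, objects, and conditional expressions all work, as with classnames
const buttonClass = (isActive, size) =>
  clsx('btn', { 'btn--active': isActive }, size && `btn--${size}`);

// buttonClass(true, 'lg') -> 'btn btn--active btn--lg'
```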
Next, we used the Webpack Bundle Analyzer plugin to pinpoint the problem areas in our bundles.
Bundle Analyzer revealed our largest bundles: client-location and map-with-flashing-dots
To break these up, we split the large bundles into smaller chunks using code splitting and lazy loading. Webpack’s built-in code splitting let us turn a direct import into a function call that points to a file path. The call returns a promise, essentially a pledge that the file will be loaded; when the code actually needs the module, the promise resolves and the file is fetched.
For non-essential views, HTML, and JS, we used dynamic imports, which reduced the initial size of the page. ‘Dynamic’ here means the website decides whether to load the extra files based on specific conditions, so the user experience is never disrupted. In React, the pattern looks like this:
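This is a sketch rather than our exact code; the component name echoes the bundle from the analyzer screenshot, and the path is a placeholder:

```jsx
import React, { Suspense, lazy } from 'react';

// The heavy map module becomes its own chunk, fetched on demand
const MapWithFlashingDots = lazy(() => import('./MapWithFlashingDots'));

const ContactSection = () => (
  <Suspense fallback={<div>Loading map…</div>}>
    {/* The map's bundle is downloaded only when this section renders */}
    <MapWithFlashingDots />
  </Suspense>
);

export default ContactSection;
```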
The outcome of our optimizations: a smooth loading profile with no isolated oversized bundles
After splitting the large bundles into smaller chunks, we noticeably lightened the page and eliminated every oversized file bundle.
6. Shrinking HTML and CSS Through Code Compression
Code compression was a game changer. By minifying the code and stripping unnecessary whitespace, we got smaller files, faster downloads, and lower bandwidth usage. Our server now delivers website files gzip-compressed, which improved speed and lifted key metrics like FCP and FID (First Input Delay, a metric that gauges the delay between the first click and the browser’s response) from poor to moderate.
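On a Node server, gzip is a one-liner with the ‘compression’ middleware; most static hosts and CDNs enable it with a config switch instead. A sketch, assuming an Express setup:

```js
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());            // gzip-compress every compressible response
app.use(express.static('public')); // serve the built site
app.listen(3000);
```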
7. Enhancing Code Quality and Addressing Memory Leaks
On closer examination, we unearthed some sneaky memory leaks in our codebase. Objects lingered in memory even after they were no longer needed, cluttering the heap.
To fix this, we took two approaches. Where an event listener is needed only once, we passed the { once: true } option, which automatically removes the listener after it fires, averting memory complications. We also made a point of explicitly removing event listeners with removeEventListener(), either before removing the element or once the listener became obsolete. This cleanly decouples elements from their handlers and prevents leaks:
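A sketch of both approaches; the element and handlers are illustrative:

```js
const openButton = document.querySelector('#open-menu');
const openMenu = () => document.body.classList.add('menu-open');
const onResize = () => console.log(window.innerWidth);

// 1. One-shot listener: the browser detaches it automatically after it fires
openButton.addEventListener('click', openMenu, { once: true });

// 2. Manual cleanup for longer-lived listeners
window.addEventListener('resize', onResize);
// …later, once the listener is obsolete or the element is being removed:
window.removeEventListener('resize', onResize);
```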
Another option worth noting for addEventListener is { passive: true }. It tells the browser the handler won’t block scrolling, which prevents interface jank during scroll interactions and improves the user experience:
```js
useEffect(() => {
  // Initialize state from the current scroll position
  setScrolled(document.documentElement.scrollTop > 50);

  // passive: true promises the handler won't call preventDefault(),
  // so the browser can keep scrolling smooth
  window.addEventListener('scroll', handleScroll, { passive: true });

  return () => {
    // Cleanup on unmount: restore scrolling and detach the listener
    document.body.style.overflowY = 'scroll';
    window.removeEventListener('scroll', handleScroll);
  };
}, []);
```
From Before to After: The Impact of Performance Enhancements on Our Website
To be frank, our starting point was far from ideal: sluggish load times and an overloaded server that seriously compromised the user experience. But armed with the right insights and optimization strategies, we put in the work and reaped impressive results. Were we satisfied? Undoubtedly.
PageSpeed Insights handed us a commendable report card, highlighting substantial improvements
A month of performance monitoring showed remarkable progress: every metric improved severalfold compared with our starting point
These transformations wouldn’t have been achievable without countless hours and days of dedicated effort, but another factor behind our success was using the right tools. Several of them saved us real time and deserve acknowledgment.
Tools for Elevating Performance: PageSpeed Insights and Lighthouse
For performance evaluation and improvement, two Google tools stand out: PageSpeed Insights and Lighthouse. They analyze many facets of a webpage and report on its speed, user experience, and overall performance. Here are the metrics they collectively assess:
- Largest Contentful Paint (LCP) – gauges the duration for the largest content element (e.g., an image or text block) in the visible area to become visible to the user.
- First Contentful Paint (FCP) – measures the time for the browser to render the initial content piece, such as text or an image.
- First Input Delay (FID) – evaluates the delay between a user’s primary interaction (e.g., button click) and the browser’s response to that action.
- Cumulative Layout Shift (CLS) – tallies the total of all individual layout shift scores occurring during the lifespan of the page. A layout shift happens when visible elements on a page move unexpectedly.
- Interaction to Next Paint (INP) – calculates the time for the browser to react to user interactions (e.g., clicks or taps) by updating the visual content on the page.
- Time to First Byte (TTFB) – determines the duration for the browser to receive the initial byte of data from the server following a request.
- Total Blocking Time (TBT) – quantifies the total duration during which the browser’s primary thread is obstructed and incapable of responding to user inputs.
- Time to Interactive (TTI) – assesses the time for a page to become fully interactive, meaning all required resources are loaded and the page responds promptly to user input.
You can use either tool to gain insight into your website’s performance. Lighthouse offers more flexibility and more detailed diagnostics, while PageSpeed Insights focuses on monitoring the performance of an individual page.
While these two are the most widely used, they aren’t our only recommendations. Here are some additional resources that can prove invaluable on your performance optimization journey.
Supplementary Tools for a Deeper Dive
GTmetrix: a robust tool that not only audits pages but also visualizes the loading sequence in an easy-to-read format, making performance improvements tangible.
Yellow Lab Tools: an insightful dashboard that categorizes and scores numerous metrics. It not only identifies issues but also provides detailed optimization suggestions.
These tools have been our guiding lights, helping us boost performance and deliver an exceptional user experience. Keep them in mind as you set out to improve your own website’s performance.
Conclusion
If your business depends on your website, neglecting performance is not an option. Performance can carry you to the top of Google’s search results or, conversely, send users off to faster websites with better experiences. To help you sidestep that fate, we’ve shared the story of our website’s transformation.
Our pursuit of optimal performance continues; there is always room for improvement. Recently, we’ve run into some challenges with GatsbyJS and are considering a move to NextJS, but that deliberation deserves its own article.
Just as no two businesses are alike, there is no universal blueprint for boosting a site’s performance. Every case is unique and calls for a tailored approach to the specific challenges at hand. It isn’t an effortless task, but we’re ready to help: if you want to elevate your website’s performance, reach out, and we’ll see what we can do for you.