Here are some of the questions submitted by the session attendees and my answers.
What did you use to test the rendering success rate [in your session example]?
We set up an automated monitoring script that checks a considerable number of pages on the site every day at 8 a.m. The script checks multiple elements on each page. One of those elements is the language selector, because we found that the language selector is missing when prerendering fails. Since we know how many pages the script checked (the number is the same every day) and how many times prerendering failed (the language selector was not found), we can calculate the rendering success rate.
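A minimal sketch of that kind of check, assuming hypothetical URLs and a hypothetical marker string (this is not the actual script from the session):

```python
# Hypothetical monitoring sketch: the URLs, user agent string and MARKER
# value are placeholders, not the real site's details.
from urllib.request import Request, urlopen

PAGES = [
    "https://example.com/",
    "https://example.com/pricing",
]
# Element that is only present in the HTML when prerendering succeeded.
MARKER = 'class="language-selector"'

def page_prerendered(html: str) -> bool:
    """True if the language selector made it into the served HTML."""
    return MARKER in html

def rendering_success_rate(results: list) -> float:
    """Share of checked pages where the marker element was found."""
    return sum(results) / len(results)

def run_daily_check() -> float:
    results = []
    for url in PAGES:
        req = Request(url, headers={"User-Agent": "Googlebot"})
        html = urlopen(req, timeout=10).read().decode("utf-8", "replace")
        results.append(page_prerendered(html))
    return rendering_success_rate(results)
```

In practice you would schedule `run_daily_check()` (e.g., via cron at 8 a.m.) and log or alert on the returned rate.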
If you don’t have a monitoring solution, you can use Screaming Frog to achieve a similar result.
- Set Rendering to “Text Only” and switch the user agent to Google Smartphone.
- Use Custom Search or Custom Extraction to target the element that’s not present when the prerendering process fails.
- Crawl the site (or a significant sample of pages).
- Repeat the crawl multiple times over the next week.
- Count the number of pages where the monitored element is present, divide by the total number of pages checked, and you have your rendering success rate.
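The arithmetic in the last step, sketched with made-up crawl counts (one entry per repeated crawl over the week):

```python
# Hypothetical numbers: counts of checked pages and pages where the
# monitored element was found, one dict per Screaming Frog crawl.
crawls = [
    {"checked": 500, "element_found": 496},
    {"checked": 500, "element_found": 489},
    {"checked": 500, "element_found": 500},
]

total_checked = sum(c["checked"] for c in crawls)
total_found = sum(c["element_found"] for c in crawls)
success_rate = total_found / total_checked  # 1485 / 1500 = 0.99
print(f"Rendering success rate: {success_rate:.1%}")
```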
Do you have any tips for dealing with dynamic rendering when your site uses external A/B testing tools that are inherently client-side rendering?
I’d want Google to see only one version of a page. This means I’d serve the old version to search engines until the new tested design is permanent. Since you’re already doing user agent detection for dynamic rendering, you can skip adding the A/B testing code when a request comes from a search engine bot and include it only when the page is served to a user.
If the content in question is visible by default and you want to hide it after an interaction, that’s fine. Google doesn’t click on or hover over elements.
I have exactly the same new implementation as company White – with opacity. This has been bothering me, as the pages that migrated to this new implementation are not performing as well as previously. Can you confirm you didn’t see any issues with opacity, and that there’s no need to try to address/change it?
Every website is different, so I can speak only to the one I have encountered. We didn’t see any noticeable improvement after removing the initial opacity:0, but it was a site with massive branded traffic. Generally, if your website doesn’t receive much branded traffic and relies heavily on non-branded traffic, I would want to remove opacity:0 sooner rather than later. If the vast majority of your organic traffic comes from branded queries, I’d assign a lower priority to this but still want to get it done at some point.
How can you work closely with devs on these checks if they are remote or in India with a big time difference?
I often work with people in a different city or on a different continent, and one thing that’s always worked for me is Skype/Slack calls. I wake up early or stay late for a call rather than exchange long emails. The calls help me understand their workflow and challenges better, while I get a chance to explain why automated testing should be in place and to address their immediate questions.
Once both sides are clear on why we’re doing it, I still consider it essential to create a ticket with concise but thorough requirements and acceptance criteria to avoid any miscommunication.
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.