Learn what not to overlook when it comes to structured data, AMP, JavaScript SEO, and mobile-first indexing.
Watch the full course for free: [ Link ]
0:06 JavaScript SEO Basics, how it affects Google, and what Google did
1:44 The current process
2:44 The optimal scenario
3:07 Google is using Chrome 41 to render your site
3:55 Google’s Rich Results Testing tool
4:35 Summary
✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹
You might find it useful:
Tune up your website’s internal linking with the Site Audit tool:
➠ [ Link ]
Learn how to use SEMrush Site Audit in our free course:
➠ [ Link ]
✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹
Nowadays, more than 97% of all sites use some kind of JavaScript. This means Google's regular crawler can't see what is happening in those areas, because JavaScript is executed on the client side: the changes happen within the browser, not on the server. So Google misses all of that.
The crawler that Google built ages ago has of course been updated frequently, but it is still not capable of rendering a website. That means that if Google hadn't done something about it, they would be missing out on all the changes happening in the frontend, on the client side.
So, one of the things Google had to do was build a crawler capable of executing and rendering client-side actions, mainly driven by JS. The goal was for Google to understand what you would be presented with in a modern web browser; Google wanted to "see" that as well while crawling your website.
So essentially, in the past, when you looked at the HTML markup you saw what the crawler saw. Now it is entirely different. If you look at a website that uses a client-side JS framework, you only see some very cryptic markup, not the real content itself. If you render the site, on the other hand, the content gets injected dynamically. That is what Google was concerned about: that they would eventually miss important content on the web.
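To make that concrete, here is a minimal sketch of a check you can run yourself: fetch the raw HTML the way a text-based crawler would and look for a piece of your main content. The URL, the sample phrase, and the use of Node's built-in fetch (Node 18+) are assumptions for illustration; the course does not prescribe a specific script.

```typescript
// Minimal sketch (Node 18+, built-in fetch): compare what a non-rendering
// crawler receives with what users see after JavaScript runs.
// The URL and the sample phrase are placeholders.

const url = "https://example.com/some-page";          // hypothetical page
const expectedPhrase = "Your main product headline";  // text users see in the browser

async function checkRawHtml(): Promise<void> {
  const response = await fetch(url);
  const rawHtml = await response.text(); // the HTML before any JavaScript executes

  if (rawHtml.includes(expectedPhrase)) {
    console.log("Phrase found in the raw HTML – a text-based crawler can see it.");
  } else {
    console.log("Phrase missing from the raw HTML – it is likely injected client-side.");
  }
}

checkRawHtml().catch(console.error);
```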
So let's have a look at the process that is happening right now. Currently, the old classic crawl is still going on, and based on that there is an instant first wave of indexing built on the classic crawl data. As more resources become available, Google starts to render that same website and adds further data taken from the rendering process; the additional information is merged into the data they have already collected. In a nutshell, they still do the regular, old-fashioned text-based crawl, and then on top of it they run this new JS rendering to see what is going on there as well, in case something "hidden" was missed in the initial crawl.
Client-side JS means extra work for Google. The process has multiple steps, and this second wave is slower, which ultimately leads to delayed indexing. The optimal scenario is that the main content and all critical links are directly available in the HTML source. rel="canonical", rel="amphtml" and similar annotations should be in the markup as well, so that Google picks them up straight away. JS should and can further enhance a page's functionality, but not replace it.
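As a rough illustration (not part of the course material), a quick script can confirm that those annotations really are in the raw source. The URL and the regex-based checks below are simplified assumptions; a regex is not a full HTML parser, but it is enough for a spot check.

```typescript
// Sketch: verify that critical link elements are present in the raw HTML,
// not only in the rendered DOM. The URL is a placeholder.

const pageUrl = "https://example.com/article";

async function checkCriticalTags(): Promise<void> {
  const rawHtml = await (await fetch(pageUrl)).text();

  const hasCanonical = /<link[^>]+rel=["']canonical["']/i.test(rawHtml);
  const hasAmpHtml   = /<link[^>]+rel=["']amphtml["']/i.test(rawHtml);

  console.log(`rel="canonical" in source: ${hasCanonical}`);
  console.log(`rel="amphtml" in source:   ${hasAmpHtml}`);
}

checkCriticalTags().catch(console.error);
```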
Also, it's important to understand that Google right now is using a very old version of Chrome, Chrome 41, to render your site. This version was released back in March 2015; in browser terms, it is ancient.
If you compare the features of Chrome 41 with Chrome 66, you will see significant differences. Even if everything works well when you debug in your current browser, there may still be differences in the much older version that Google keeps using. So if you work with JS from an SEO perspective, it would be wise to run an old version of Chrome, with its Developer Console, on your local machine so you can understand what is going on there.
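If you just want a quick sense of which newer browser features your pages rely on, a simple feature check like the sketch below can help; it is not from the course, and the Chrome version numbers in the comments are approximate. It contains no type annotations, so it can also be pasted into a DevTools console as plain JavaScript.

```typescript
// Rough feature check: flag APIs that arrived after Chrome 41.
// Version numbers in the labels are approximate.
const checks = [
  ["window.fetch (roughly Chrome 42+)", "fetch" in window],
  ["IntersectionObserver (roughly Chrome 51+)", "IntersectionObserver" in window],
  ["Object.entries (roughly Chrome 54+)", "entries" in Object],
  ["Array.prototype.includes (roughly Chrome 47+)", "includes" in Array.prototype],
];

for (const [name, supported] of checks) {
  // Anything you rely on that is newer than the rendering browser
  // needs a polyfill or transpilation.
  console.log(`${name}: ${supported ? "available" : "MISSING"}`);
}
```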
Also, Google has a Rich Results Testing Tool that shows the computed DOM. If you combine that with the regular markup, you can take something like diffchecker.com and compare markup vs. computed DOM to see what the major differences are: on the left-hand side you have the HTML source, on the right-hand side you have the computed DOM from the Rich Results test. Now you can easily spot differences and start debugging to understand what is wrong and what is not.
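The same comparison can also be scripted locally. The sketch below assumes Node 18+ and the puppeteer package ("npm install puppeteer"), neither of which is mentioned in the video, and uses a placeholder URL; it simply writes the raw source and the rendered DOM to two files so you can diff them with any tool, diffchecker.com included.

```typescript
// Sketch: dump raw HTML and the post-render DOM for a side-by-side diff.
import { writeFile } from "node:fs/promises";
import puppeteer from "puppeteer";

const url = "https://example.com/some-page"; // placeholder

async function dumpBothVersions(): Promise<void> {
  // 1. Raw HTML: what a text-based crawler receives.
  const rawHtml = await (await fetch(url)).text();
  await writeFile("raw-source.html", rawHtml);

  // 2. Computed DOM: the page after JavaScript has run in a headless browser.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const renderedDom = await page.content(); // serialized post-render DOM
  await writeFile("rendered-dom.html", renderedDom);
  await browser.close();

  console.log("Wrote raw-source.html and rendered-dom.html – diff them to spot injected content.");
}

dumpBothVersions().catch(console.error);
```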
#TechnicalSEO #TechnicalSEOcourse #JSSEO #JavaScriptSEO #SEMrushAcademy