Ever since the launch of Microsoft’s Bing search engine, Bing has been integrating Yahoo’s search services (i.e. Yahoo SEO and Yahoo Search Marketing) into Bing SEO and Bing Adcenter through the Bing-Yahoo Search Alliance.
- Bing’s search algorithm for ranking sites and web pages – compare the Google and Bing search algorithms
- Bing’s organic search queries – one alternative for keyword research is Bing’s keyword tool
Bing’s Webmaster Center Blog has listed the 2011 updates to Bing Webmaster Tools, covering:
- Traffic Reports Depth
- Crawler Control
- Index Tracker
- User Permissions
- User Experience and Communications
- Block URLs
- Improved Speed
- Know More, Faster
- Expanded Crawl Details
- URL Normalization
- Webmaster Account Verification
- Adcenter CPC Integration
Traffic Reports Depth
Early in the year, we added more information to the Traffic reports. This includes the new detailed traffic stats feature. We are showing much more information to webmasters, including:
- Average impression and click positions for their top 100 queries.
- The specific pages showing up for a particular query, including their impressions, clicks, CTR, average impression and click positions, and position details.
- Their top 100 pages including their impressions, clicks, CTR, and average impression and click positions.
- The specific queries showing up for a particular page, including their impressions, clicks, CTR, average impression and click positions, and position details.
- Trending over time for their top 100 queries.
These additions made it easier for Webmasters to see which phrases were driving traffic, and to which pages on their websites that traffic was going.
Around this same time, maybe a little earlier, we removed the need to install Silverlight to view the reports by moving the tools to HTML5. With this move, we made it easier for users to interact with our reports and saw report load times slightly improve as well. This move meant our reports were now viewable in all modern browsers and on smartphones as well.
Crawler Control
Users asked loud and clear for ways to control the crawler, so in addition to the usual avenue open via your robots.txt, we built a tool that allows you to easily drag and drop a graph to set the crawl rate specific to your website. It’s as simple as creating the crawl pattern you want. This means it’s easy to tailor our crawling efforts to times when your bandwidth is least affected. Telling us to stay away during peak business hours and visit when everyone is asleep is as easy as clicking your mouse. We added a checkbox for you to let us know if we’d encounter AJAX URLs along the way, and there is an option to simply let us figure it all out for you.
The times shown were all GMT, so we added a layer to display your server time relative to GMT, hopefully making it easier to understand when your peak business hours hit, and allowing you to build a control graph around those hours.
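The robots.txt route mentioned above remains available for slowing the crawler down as well. A minimal sketch of a crawl-delay directive of the kind several crawlers honor (the value is illustrative, not a recommendation):

```
# robots.txt – ask Bing's crawler to pause between requests
User-agent: bingbot
Crawl-delay: 5
```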
Index Tracker
Our Index Tracker tool received a thorough backend rewrite, improving coverage, performance, depth and latency. When these improvements were released, even brand new websites saw all of their data as the Bing index held it. No longer having to wait 48+ hours meant webmasters could understand what Bing was actually indexing, and how it was performing, much faster.
User Permissions
We introduced user controls and permissions. Site owners could now grant admin, read/write, or read-only access to other users for their site using their existing verification code. This made it much easier for businesses to manage their accounts, share access and control who could turn the knobs and pull the levers. This layer of control allowed businesses to manage account ownership and access at a new level, ensuring that, as employees moved around, access to and control of these tools remained in hand.
We enhanced the Sitemap UX by showing not only user-submitted sitemaps, but all sitemaps we know of. We also expanded the types of feeds we accept, enabling folks to submit Atom, RSS and XML sitemaps.
User Experience and Communications
Working to make it easier to navigate inside the tools, we added a second layer to the tabbed navigation, which you see when logged in.
In an effort to improve communications, we enabled the ability to tell us where to email you and what to email you. If there is an alert appearing inside your account for malware, as an example, you can now opt to receive an email alert at the address of your choice. This made it much easier to know when something was happening.
Block URLs
Our “Block URLs” feature got a backend overhaul to include more built-in safeguards, helping to protect webmasters from accidentally blocking large numbers of URLs. While the control is still in your hands, there are added layers of checks on our end that prompt confirmations before actions take place.
Improved Speed
We did some work on how we cache all of your website data as well, resulting in a dramatic drop in latency: in some cases, a report that took 5 seconds to load now loads in 400 milliseconds. That dramatically improved the performance of reporting.
Know More, Faster
Later in the year, we expanded user email communication preferences and controls. It’s easy to stay up to date on the latest announcements and alerts applied to your account. You can set frequency preferences, select options on which alerts you want notifications for and set a dedicated email and contact number.
Expanded Crawl Details
The expansion of Crawl Details means you now see all URLs attributed to HTTP header codes, along with the notations made. This was a great step forward in understanding which URLs Bing was having issues accessing, and it makes it much easier to tell whether any high-value URLs are being seen by us in ways you’d like to change.
URL Normalization
To help you manage parameters in a more in-depth manner, we expanded the number of parameters you can assign us to manage from 25 items to 50. This made it easier for larger and older sites to manage legacy URLs.
Webmaster Account Verification
This was one folks were asking for all year, and we managed to get it in with one of our Fall releases: the ability to verify a website via a DNS change. Past choices included adding a snippet to your <head> code or uploading a dedicated XML file to the root of your site, and both options remain valid ways to verify your domains. This third option allows you to place a discrete CNAME record in your DNS to validate a domain as well.
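As a rough sketch of what the DNS option looks like, the CNAME record uses a code supplied by your Bing Webmaster Tools account (the code below is a placeholder, and the target host shown is the commonly documented one; take the exact values from your account):

```
; Hypothetical zone-file entry for Bing ownership verification
<verification-code>.example.com.   IN   CNAME   verify.bing.com.
```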
Adcenter CPC Integration
In an effort to help businesses understand how they can easily expand their inbound traffic, we began showing CPC data from Adcenter inside your Webmaster accounts. When looking at Traffic data, to the left of the keywords you now see CPC amounts associated with those keywords in Adcenter. This is an excellent way to understand the practical cost of driving more traffic through paid search, and can help in targeting keywords you perform well for in organic search, thereby increasing your business’s footprint on the search results page.
Bing’s Webmaster Center Blog also outlines the on-site fundamentals a webmaster needs to be able to control, starting with crawlability. If a crawler can’t access your content, the content won’t be indexed by search engines, nor will it be ranked. Enable and use XML sitemaps with a low error rate to build trust with search engines. Make sure your website navigation is clean and strive for a simple, search-friendly URL structure, meaning the URL is keyword rich and avoids session variables or docIDs. We suggest you use robots.txt files to instruct the crawlers on how to interact with your webpages and more easily find your content.
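To illustrate the URL-structure point above (hypothetical URLs, just to show the contrast):

```
Hard to crawl and read:  http://www.example.com/product.aspx?docid=83757&sessionid=AF8C2B91
Search-friendly:         http://www.example.com/shoes/running/lightweight-trail-runner
```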
Provide a site structure with strong functionality, which helps encourage link building. Linking to both trusted outside sources and internal content shows a search engine you care about users getting the best data around their query. In addition, HTML sitemaps ensure a good user experience and help search engines discover all your pages and content.
When you plan your website’s content hierarchy, take care in aligning your content with what searchers are looking for and their intent during their journey on your URL. Basic keyword research can also help you understand how searchers are interacting with search engines, and can help craft your content strategy. Also avoid placing links and content inside rich media applications, such as Flash and Silverlight, which make it almost impossible for crawlers to find and read the content. Just gotta have your rich media? Not to worry. Make sure a solid downlevel experience exists and we’ll still find your content.
Get your RSS feeds up and running and keep them clean. By following your feeds, it’s easier for the engine to get your latest content. This means we see it faster, so indexing, ranking and showing in the SERPs can happen faster. Want to really impress Bing? Get into your Bing Webmaster account and insert your RSS feed URL into the sitemap submission flow.
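A minimal, hypothetical RSS 2.0 feed of the kind you could submit through the sitemap flow might look like this (URLs, titles and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Site – Latest Articles</title>
    <link>http://www.example.com/</link>
    <description>New content as it is published</description>
    <item>
      <title>Example article title</title>
      <link>http://www.example.com/articles/example-article</link>
      <pubDate>Mon, 05 Dec 2011 08:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```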
Being able to tell the engine which version of your URL you’d like to have attributed as the original is pretty useful. This handy command can help you build value on the version of the URL that matters most to you, and help combine value attributed to many versions of the URL into one location, helping boost the rank of that one, original version of the URL.
The tough part here is usually getting the code installed on each page, and of course, each instance needs to point to one selected URL for this to work. We’d rather you didn’t use the rel=canonical to cover issues where your CMS needs work. If the CMS is causing instances of duplicate URLs to occur, you should fix the problem. We see increasing usage of the rel=canonical across huge numbers of pages on large websites. While we don’t really like this, either, we can work with it, as we understand the need to balance the workload and the returns.
The bottom line, though, remains that you need to be able to manage the rel=canonical, and if you don’t have control over when it’s deployed, which URLs it points to and when it is used, you need to work that out.
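For reference, the element itself is a single line placed in the <head> of the duplicate page, pointing at the URL you want credited (the example.com URLs are placeholders):

```html
<!-- On http://www.example.com/shoes?sort=price (the duplicate) -->
<link rel="canonical" href="http://www.example.com/shoes" />
```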
Seems like a no-brainer, this one, but so many websites remain without a robots.txt file. In some cases it’s a purely missed opportunity, or the site owner is unaware of what a robots.txt file is. In other cases, though, it’s the inability to place a file on the root of your domain.
Regardless of the blocker, the robots.txt file is one of the most important files you can place on a web-server. Given it is the location search crawlers reference to understand how to interact with the website, it’s a pretty powerful document. If you do not have access to place your robots.txt in the correct location, you need to understand why this control is lacking. Then solve the problem.
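As a sketch, a minimal robots.txt placed at the root of the domain, allowing crawling, keeping crawlers out of one directory and pointing them at your sitemap, could look like this (the path and sitemap URL are placeholders):

```
User-agent: *
Disallow: /private/
Sitemap: http://www.example.com/sitemap.xml
```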
The XML sitemap is another file missing from a huge number of websites today. It is another important file the search crawlers look for, one that is referenced inside the robots.txt mentioned above, and one which can help get more of your pages into the search index. Overall, it’s almost as important as the robots.txt file, and if you cannot place these files in a location the crawlers can find, you need to fix this issue.
This file typically lives on the root of the domain, but for larger websites, where multiple files may exist to capture all of the URLs present, maybe only a sitemap index file is on the root. Whichever your case, it’s important the crawlers can find it, and if you cannot access the root of your server to place files there, it’s a missed opportunity.
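A minimal sketch of a sitemap file, with placeholder URLs and dates; larger sites can list several such files in a sitemap index on the root:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2011-12-05</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/shoes/running</loc>
    <lastmod>2011-11-28</lastmod>
  </url>
</urlset>
```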
Marking up your content has been around for a few years now, and with the launch earlier this year of www.schema.org, the major engines have made a clear statement that there is value in marking up your content. By embedding these elements in your page code, we can extract information more accurately and use that information to provide increasingly richer search results. You are credited as the source for the data used. This is important work for sites seeking to differentiate themselves from the pack as we move into 2012.
Websites need to balance the future value against the workload to implement, and still need to keep in mind that implementing these elements won’t immediately increase rankings. This work helps us better understand relevancy.
What’s important here is planning for the work and ensuring you have buy-in across your organization.
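A small sketch of what such markup can look like, using schema.org microdata on a hypothetical product page (the property names follow the schema.org Product vocabulary; the values are placeholders):

```html
<div itemscope itemtype="http://schema.org/Product">
  <h1 itemprop="name">Lightweight Trail Runner</h1>
  <img itemprop="image" src="/images/trail-runner.jpg" alt="Lightweight trail running shoe" />
  <p itemprop="description">A breathable trail shoe for long-distance runs.</p>
  <span itemprop="brand">ExampleBrand</span>
</div>
```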
Title, Description, Alt Tags, etc.
Managing your title tags, meta descriptions, alt tags, etc. is still important. All these basic, on-page/technical SEO factors add up to help us understand what your content is relevant for. The bottom line with these items is that you need to be able to manage them. If you cannot change these elements on a per-page basis, you lack needed control. We only mention three here, but the same applies to all of them.
That meta description you don’t care to alter and let appear across all your pages? While writing unique ones for each page won’t suddenly vault your pages to number 1 in the rankings, it can make the difference between a searcher clicking on your result in the search results or not. Think of the meta description as the “call to action” that, when read by a searcher, tells them why they should click your result. Better to have your words appearing in our search results than random text we take from the page because your meta description was low quality.
We use the meta description as an example, but the same level of thought should be applied to all of your on page optimization efforts.
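As a sketch, the per-page elements in question look like this (the text values are placeholders you would tailor to each individual page):

```html
<head>
  <title>Lightweight Trail Running Shoes | Example Store</title>
  <meta name="description" content="Shop breathable, lightweight trail running shoes with free returns." />
</head>
<!-- In the body: descriptive alt text on images -->
<img src="/images/trail-runner.jpg" alt="Lightweight trail running shoe, side view" />
```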
This might seem pretty obvious, but with so many websites still aggregating content or using article services to build out pages, it’s worth mentioning. We talked about how to build good content a little while back, and the value of “article-site content”, but we still see websites trying to get ahead in the rankings by basing their websites on a thin content model. Such sites are often very polished looking, and while they may provide value in aggregating a lot of items in one location, they still aren’t adding anything new and unique to the conversation. A website designed primarily to hold affiliate links that get the user off the website quickly and into a shopping cart on another website is an example of a thin content website, though not the only example. Affiliate links on a website can be completely useful, but when the content on the site is duplicating that from the original website, there’s simply no reason why that thin content site should outrank the real deal.
Control as applied to content means you can influence the creation of quality content on the website. If the website is not producing unique, quality content, it won’t last long in the search results.
This is pretty straightforward. You need to be able to put the verification code in place to use webmaster tools. This could be in the form of embedding a tag in the <head> code of your webpage, or by a notation added in the DNS for the website. No matter the option used, you need to have access to make this happen, and if you do not have that access, you lack the control needed to access some of the richest data about your website online – our webmaster tools data.
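As an illustration, the <head>-based option is a single meta tag carrying the code your Webmaster Tools account gives you (the name attribute shown is the one Bing has commonly used; the content value is a placeholder):

```html
<head>
  <meta name="msvalidate.01" content="0123456789ABCDEF0123456789ABCDEF" />
</head>
```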
If you don’t have a website your visitors love, you’re missing an opportunity. Get cracking on a user experience review and see where you’re bleeding users. By staying tuned in to what users like and dislike about your website, you can make the myriad small changes needed to field a UX-winning site. And if you keep visitors happy, they share you more often with friends, netting you more links. Visitors are also more likely to come back to you again in the future if they like the site and find it easy to get what they’re after.
It might seem like a small thing to focus on the user experience, but that UX directly influences the happiness of visitors, and the engines can see that in how searchers react to your site when they see it ranked in search results. If you don’t have input on UX improvements, you need to push. This is an important aspect of optimizing any website.
Social Sharing Integration
This almost goes without saying, but we still see so many websites not involved socially with their visitors. Social isn’t going away, folks, and while it does take work, skipping social integration is a missed opportunity. People share what they like with friends. If you have social sharing icons embedded in your pages, you are much more likely to get shared by visitors. At the very least, you need to cover this angle. Get the buttons embedded into your pages so your visitors can share your content with their friends and followers.
One of the major SEO issues for any large website is duplicate content/URLs. The Bing Webmaster Central Blog explains how to handle duplicate content URLs and how canonical URLs, 301 permanent redirects and 302 temporary redirects work:
The 301 redirect defines a redirect which tells the search engine the content has moved permanently to a new location. This is preferred as it clearly states the intent to move and instructs the engine to transfer any value the URL has accrued from the old URL to the new location. It’s important to know that the 301 redirect does not pass all of the value from an old URL to a new one. The new URL does need to build its own level of trust with the engines, which is why we won’t simply transfer full value to new URLs.
The 302 redirect defines a redirect which states the content has moved temporarily and will return to its original URL shortly. This redirect will still move people to your content, but the engines are essentially being told to hold their assigned values on the original URLs and wait for the content to return to the original URL. This is not what you want. Be careful when requesting redirects be implemented, and clearly define that you need to have a 301 in place. Otherwise, you can lose value from your original URLs, leaving your new ones to struggle on their own.
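As a sketch, on an Apache server a 301 and a 302 can be declared like this in the site configuration or .htaccess (paths and host are placeholders); whatever mechanism your stack uses, what matters is that the response carries the intended status code:

```apache
# Permanent move: ask engines to transfer accumulated value to the new URL
Redirect 301 /old-page.html http://www.example.com/new-page.html

# Temporary move: engines keep value on the original URL
Redirect 302 /summer-sale http://www.example.com/holding-page.html
```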
It’s important to understand that a rel=canonical is not a true redirect. There has been a lot written that it’s “basically like a 301 redirect”, which can be misleading. The purpose of the rel=canonical is to help the engines understand when an individual URL is essentially a duplicate of another. The rel=canonical element does suggest to the engine that any value assigned to the duplicate URL be assigned to the original URL, though. This is similar to how the 301 functions, which is the origin of the over-simplification noted above.
The biggest difference between the two is that while a 301 redirect physically moves a visitor to the new URL, the rel=canonical does not physically move anyone anywhere.
Something else you need to keep in mind when using the rel=canonical is that it was never intended to appear across large numbers of pages. We’re already seeing a lot of implementations where the command is being used incorrectly. To be clear, using the rel=canonical doesn’t really hurt you. But it doesn’t help us trust the signal when you use it incorrectly across thousands of pages, yet correctly across a few others on your website.
A lot of websites have rel=canonicals in place as placeholders within their page code. It’s best to leave them blank rather than point them at themselves. Pointing a rel=canonical at the page it is installed on essentially tells us “this page is a copy of itself. Please pass any value from itself to itself.” No need for that.
We do understand that doing work at scale requires some compromises, as it’s not easy to implement anything on a large site page by page. In such cases, leave the rel=canonical blank until needed.
Review the definitions of 301/302 redirects alongside the other HTTP status codes (2xx, 3xx, 4xx, 5xx).
Other Search Engines’ Webmaster Tools
Besides Bing, other search engines also offer webmaster tools:
- Google, the world’s largest search engine (available in multiple languages): Google Webmaster Tools
- Yandex, Russia’s number one search engine: Yandex Webmaster Tools (English)
- Baidu, China’s leading search engine: Baidu Webmaster Tools