Both are honored by all crawlers that respect webmasters' wishes. Not all do, and against those that don't, neither technique is sufficient.
You can use robots.txt rules for general things, like disallowing whole sections of your site. If you say Disallow: /family, then a compliant crawler will not fetch any URL starting with /family (though such URLs can still end up in the index if other sites link to them).
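For example, a minimal robots.txt expressing that rule (with /family standing in for whatever section you want to block) would be:

# Applies to all crawlers; do not fetch anything under /family
User-agent: *
Disallow: /family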
The meta tag can be used to disallow a single page. Pages disallowed by meta tags do not affect sub-pages in the page hierarchy: if you have a noindex meta tag on /work, it does not prevent a crawler from accessing /work/my-publications if there is a link to it on an allowed page.
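As a sketch, assuming /work is the page in question, the tag would sit in that page's head:

<!-- In the <head> of /work: keep this page out of the index -->
<meta name="robots" content="noindex">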
On the most basic level, neither the meta robots tag nor the robots.txt file has authority over the other – rather, the "noindex" request has authority over the "index" request.
What I've suggested to development teams I've worked with in the past is to set the robots.txt file to allow all crawlers, then use the meta robots tag to request that a page not be indexed on a page-by-page basis. This is easy to remember, easy to update, and makes it easy to avoid conflicts and confusion.
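A minimal sketch of that setup (the paths and pages here are illustrative, not from the original post): the robots.txt allows everything,

# Allow all crawlers to fetch everything
User-agent: *
Disallow:

and any individual page you want kept out of results carries the tag in its head:

<!-- Per-page opt-out, placed in the <head> of that page only -->
<meta name="robots" content="noindex">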
The meta robots tag is much better, as it tells search engine crawlers not to index or display the pages on your server that you want kept hidden.
Personally I think the meta tag is even more useful than the robots.txt file.
The robots.txt file is intended to PREVENT bots from spidering files in the first place.
The robots meta tag can in some ways be used to get indexed files deindexed. And it is more versatile as a tool to 'guide' bots through your website without having them index all the files.
I always put up a robots.txt file BEFORE I upload any of the other files.
Then I fine-tune using the meta tag.
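One way to read that workflow (my assumption; the poster does not spell it out) is to upload a robots.txt that blocks everything until the site is ready, then loosen it and fine-tune with per-page meta tags:

# Temporary pre-launch file: block the whole site
User-agent: *
Disallow: /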
There is no real "MAJOR" difference between the two. They both do the same work; however, the meta tag is not as reliable as the .txt. This is because, I think (I could be wrong), the robots.txt is a separate file. The intention of the file/tag is to inform the search engine's robot not to index certain pages, e.g. Contact Us, etc. Hope that helps...
Robots.txt is the file to block certain pages/content from search engines. If you want to block folders or any section of a website, the robots.txt file is the better option.
If you want to block specific pages, you can use meta robots tags. In some situations you really need these tags, e.g. for listing pages.
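For listing pages specifically (my reading of that example; the post does not spell it out), a common pattern is to keep the page itself out of the index while still letting crawlers follow the links on it:

<!-- On the listing page: do not index it, but do follow its links -->
<meta name="robots" content="noindex, follow">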
There is a big difference between the meta robots tag and robots.txt.
In robots.txt, we tell crawlers which pages to crawl and which to exclude, but we do not tell them to drop those excluded pages from the index.
But if we use the meta robots tag, we can ask search engine crawlers not to index the page. The tag to be used for this is:
<meta name="robots" content="noindex">
OR
<meta name="robots" content="noindex, follow">
In the second meta tag, I have asked the robot to follow the links on that URL but not to index it in the search engine.
You can use either, but if your website has plenty of web pages, then robots.txt is easier and saves time.
You should use 'noindex, follow' in a robots meta tag rather than robots.txt, because it allows link juice to pass through. It is better from an SEO perspective.
While there is little difference between the two, the implementation of either could affect your SERPs, so it is best to be aware of the end goal of the activity you want to implement. Robots.txt files are best for disallowing a whole section of a site, such as a category, whereas a meta tag is more efficient at disallowing single files and pages. You could use both a meta robots tag and a robots.txt file, as neither has authority over the other, but "noindex" always has authority over "index" requests.