NOT a search-optimized site

Hi all,

I have built a website in PHP/HTML/CSS and it has a domain of example.com.au.

Contrary to what most developers aim for, the website is NOT to be found by people via search engines. Only people who know and type in the actual URL should be able to access it.

Any advice here? Is it a simple .htaccess file?

Thanks!

Even easier.

Many websites use robots.txt to stop Google from crawling their pages, assuming that this also prevents the pages from appearing in the search engines. But the fact is that robots.txt doesn't actually do the latter, even though it can prevent your site from being crawled.

<meta name="robots" content="noindex, nofollow" />

Basically, allow your site to be crawled, but block it from showing up in the search listings by adding that meta tag inside the header of each page.
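For placement, here's a minimal sketch of a page header carrying the tag (the filename and titles are just examples, not from the original site):

```html
<!-- header.php: included at the top of every page of the site -->
<head>
  <title>Example page</title>
  <!-- tells compliant crawlers: don't index this page, don't follow its links -->
  <meta name="robots" content="noindex, nofollow" />
</head>
```

Since the site is PHP, putting the tag in one shared header include means every page picks it up automatically.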

Actually, there are two better ways to accomplish the same goal.

1) Create a robots.txt file with these two lines in it (nothing else).

User-agent: *
Disallow: /

This tells any bot to piss off and leave the entire site alone. The difference between Harmonic's way and mine is that Harmonic's way still allows bots to crawl the site, even though they won't index the pages (sorry, dude, but I gotta slag you for that).

2) If you want to keep out the people who aren't supposed to see it, then password-protect your website. How you choose to do that is up to you, but that's my rule… if I don't want something indexed or found, I password-protect it. That way, rogue bots that ignore the robots protocol can't get at it, unauthorized people can't get into it, etc.
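As a sketch of the password-protection route on Apache (the file paths and user name here are assumptions, not details from the original site):

```apache
# .htaccess in the directory you want to protect
AuthType Basic
AuthName "Private site"
# Point this at your password file; keep it OUTSIDE the web root if you can
AuthUserFile /home/example/.htpasswd
Require valid-user
```

You'd create the password file once with something like `htpasswd -c /home/example/.htpasswd someuser`, and a single shared account is enough if all you want is to keep bots and casual visitors out.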

Thanks heaps, this is exactly what I am looking for. Something that tells ANY bot to PISS OFF —

User-agent: *
Disallow: /

Would the above code just sit in a .txt file in the site root?

The reason they don't want to password protect is that they don't want to have to give out passwords (even if they are just generic).

In essence, they want anybody who has the web address to be able to access it, but not someone who's just googling general search terms in their community. I know it's not TOTALLY private, but it's private enough for them.


I was under the impression (from previous experience) that even if you disallow the spiders from crawling and indexing your website, your site may still appear in the SERPs (e.g. if this were done on webdesignforums.net and you used robots.txt, your website would NOT be indexed, but a record would still appear for anyone who googles the domain). This is why I said screw it, allow the pages to be crawled, but block them from being listed! My way is mostly geared towards the odd page on a site though, not a whole domain.

TheGame brings up a great point with the password protection though! Some bots ignore robots.txt and try to index regardless.

And yes, that code should go in a text file called robots.txt.

Indeed, it would. It would sit in robots.txt in the site root (note the naming… all lower case, it's that picky).

Mind you, if they're trying to keep this hidden from the public, which is understandable in some cases, it probably means they're trying to keep it hidden from competitors as well. Just something to think about.

There is a URL-only listing that may show up (but normally doesn't), but it doesn't mean the page is crawled and indexed. It just means the search engine is aware that a link exists for a particular page. I wouldn't say that's a problem in this specific case, though… the general public isn't going to know about the site, it will likely be a fairly low-traffic site, and it doesn't sound like one that is going to be shared all over the web.

Where general search terms are involved, those won't show up. And if/when that becomes a problem (and it usually doesn't), set up a Google Webmaster Tools account, verify the domain, and have it removed with the URL removal tool (I've done this before, and it takes about 30 minutes).

You could also redirect obvious bots to, say, google.ca via the user agent (since pretty well any bot has the word "bot" somewhere in its user-agent string).
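A minimal sketch of that user-agent redirect in .htaccess, assuming mod_rewrite is available (the destination URL is just the example from above):

```apache
# .htaccess: bounce anything identifying itself as a bot
RewriteEngine On
# Match "bot" anywhere in the user-agent string, case-insensitively
RewriteCond %{HTTP_USER_AGENT} bot [NC]
# Send it away with a temporary redirect
RewriteRule ^ https://www.google.ca/ [R=302,L]
```

Bear in mind this only catches bots that identify themselves honestly; anything spoofing a browser user agent sails straight through, which is again why password protection is the only sure thing.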

That's partly why I suggested password protection. Even though they don't want to go there, at least it's as close as you can get to a sure thing.

So from what I can gather, even if I put in the code

User-agent: *
Disallow: /

there is still always a chance that search engines will index it?


Working with robots.txt and most of the other techniques here should be viewed as ways to reduce exposure, not eliminate it. All it might take is a link from someone else's site to yours. That link can easily be indexed in the major search engines.

The only sure way is to password protect. You could have minimal, single-password access that is the same for all the users you intend to see the site. (You wouldn't want to take this single-password approach for anything that was mega sensitive, but it will keep out both search engines and casual visitors.)

The URL, yes. The content, probably not. It's a pretty rare thing, though, and usually takes some action on the part of a person to trigger the behaviour. I've only ever seen URL-only listings for blocked URLs on one occasion, and I was able to use 301s to clear them.

Other than that, what rickidoo said.
