Keeping Logs and Stats

This is just a quick post to give praise for a tool I have been using for three years now. It really is a sweet freebie to find out there in the world of free stats and long-term logs.

MarketLeap:

- Link Popularity Check

- Keyword Verification

- Search Engine Saturation

These look just like the thousands of other tools offered on the web. The reason I like them is that they keep a running log from the first day I used one of the tools on a URL. How cool is that!

I have moved hosts and changed IP addys several times (finding a good host when you first start out causes a bunch of headaches), and these tools just keep a running total of all the URLs I have entered. It’s nice to look at months and years of link popularity. I have planned months of link strategies & campaigns, and I can check back and see how they have done.

It also helps to see what the SERPs are doing out there over time. Google and Yahoo stats are, for obvious reasons, candy for this webmaster.

So, next time you buy a site, register a name, or start a link/keyword campaign, maybe you should drop by MarketLeap and just add the URL. It keeps the records for you 😉

I tried to add an image to this post, but I have yet to figure that out! Haha. I will sooner or later. Hopefully it won’t take me as long as it took me to start my blog…

Robots.txt and why you should have one

Your site will get crawled one day. Probably not fast enough for you or your business, but unless you have a high-ranking site where you can put a link back to the site you want crawled, or you pay to have a link put there, you are at the mercy of the search engines… or rather, of your knowledge of search!
So what should you expect when spiders, robots, crawlers, and the unknown come creeping around your site looking for yummy content and links to gobble up? Most often you want every gizmo, whamming bot, and creepy crawler to ding and ping your site. BUT NOT always…
Huh…?

Why not, you ask?

Well, some bots will crawl and crawl and duplicate crawls, chewing the hell out of your bandwidth. That is one reason. Another may be to tell certain engines, scraper-type stuff, to go away. I have seen bots head out, collect a mess of stuff from forums and websites, and plop it right onto their own site as content! That’s some nerve, eh! But some of them do offer ways to block them out: they reference the robots.txt file and tell you what code to add to keep them away. You should be prepared to allow and disallow whomever you want into your site.
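On the bandwidth point: some engines (Yahoo’s Slurp and MSN’s msnbot, for example) honor a nonstandard Crawl-delay line that tells their bot to wait between requests; Google ignores it. A quick sketch, where the 10-second delay is just an example value:

User-agent: Slurp
Crawl-delay: 10

That tells Slurp to wait 10 seconds between fetches, which can take a lot of the sting out of a heavy crawler.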

The general rule is:
IF YOU DO NOT SPECIFY WHAT IS AND ISN’T ALLOWED TO BE INDEXED, IT’S ALL FAIR GAME

Here are some basics to learn about your robots.txt file so you are somewhat prepared when the time comes.

—–

To exclude all robots from the entire server

User-agent: *
Disallow: /

To allow all robots complete access

User-agent: *
Disallow:

Or create an empty “/robots.txt” file.
To exclude all robots from part of the server

User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /private/

To exclude a single robot

User-agent: BadBot
Disallow: /

To allow a single robot

User-agent: WebCrawler
Disallow:


To exclude all files except one
There is no “Allow” field. So the easiest way to do this is to put all the files to be disallowed into a separate directory, say “docs”, and leave the one file in the level above this directory:

User-agent: *
Disallow: /~my/docs/

Alternatively, you can explicitly disallow each page you want kept out:

User-agent: *
Disallow: /~my/private.html
Disallow: /~my/diary.html
Disallow: /~my/addresses.html
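
Worth knowing: some of the big crawlers (Googlebot, for one) do understand a nonstandard Allow line these days, but you can’t count on every bot supporting it, so the directory trick above is the safe bet.

If you want to sanity-check your rules before the spiders show up, here’s a minimal sketch using Python’s standard urllib.robotparser module, which reads a robots.txt the same way a well-behaved crawler would. The example.com URLs are placeholders; point it at your own site:

from urllib.robotparser import RobotFileParser

# Fetch and parse the live robots.txt file
rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()

# Ask whether a given user-agent may fetch a given URL
print(rp.can_fetch("*", "http://www.example.com/private/"))   # False if /private/ is disallowed
print(rp.can_fetch("WebCrawler", "http://www.example.com/"))  # True if WebCrawler is allowed

Run something like this after every edit to your robots.txt and you’ll catch typos before the bots do.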

Hope that helps out!

Storage – Why not paperless?

After looking around my office, I see the mounting pile of paperwork. Most of this stuff is bills, invoices, product info, client info, etc. It seems pointless to keep storing it all in file cabinets and desk drawers. I mean, come on