Sunday, October 31, 2010

WiFi peer-to-peer Direct is Go

The Wi-Fi Alliance on Monday announced that its direct peer-to-peer networking version of WiFi, called WiFi Direct, is now available on several new WiFi devices. The Alliance also announced that it has begun the process of certifying devices for WiFi Direct compatibility.

The organization has already certified a handful of WiFi cards from Atheros, Broadcom, Intel, Ralink, Realtek, and Cisco, as well as the Cisco Aironet 1240 Series access points. These devices will also be used in the test suite to certify that future devices are compatible with the protocol. Any device passing the tests will be designated "Wi-Fi CERTIFIED Wi-Fi Direct."

"We designed Wi-Fi Direct to unleash a wide variety of applications which require device connections, but do not need the internet or even a traditional network," said Edgar Figueroa, CEO of the Wi-Fi Alliance, in a statement. The certification program will ensure compatibility with the standard across a range of devices. WiFi Direct devices can also connect to older "Wi-Fi CERTIFIED" devices for backward compatibility, so chances are your current equipment will work with newer devices using the protocol.

The new protocol allows compatible devices to connect in a peer-to-peer fashion, either one-to-one or in a group, to share data with each other. The Alliance noted that many users carry a lot of data with them on portable devices like smartphones; WiFi Direct will enable users to connect these devices with each other to share that data without the need for a local WiFi network.

Though ad-hoc WiFi and Bluetooth serve similar purposes, WiFi Direct offers the longest range and fastest throughput of the three, and includes enterprise-class management and security features.

Windows 7 and Server 2008 R2 Patch Details

Microsoft has released a number of non-security updates, the majority of which are for the latest versions of its client and server operating systems. All the patches are available on Windows Update and the Microsoft Download Center and most will require a restart. With the exception of the last patch, they're all for Windows 7 or Windows Server 2008 R2.

Most of these updates will be rolled into Service Pack 1 for Windows 7 and Windows Server 2008 R2. Testers got the first Windows 7 SP1 beta build two months ago, but just today Microsoft sent out build 7601.17077 to selected PC and Technology Adoption Program partners, according to ZDNet.

The first patch (KB2028560) is vaguely described as one that delivers "new functionality and performance improvements for the graphics platform."

The second patch (KB2249857) addresses an issue that occurs on hard disk drives larger than 2TB. If the OS is configured to save dump files to a volume on such a drive, part of the dump file lies at a disk offset beyond the 2TB address, and Windows is then either put into hibernation or crashes, volumes on the drive may be corrupted and data lost. If the corrupted volumes include the system partition, the computer will no longer boot.

The third patch (KB982110) fixes a problem when running 32-bit applications on a 64-bit edition of Windows 7 or Windows Server 2008 R2. If the application uses the QueryPathOfRegTypeLib function to retrieve the path of a registered type library, it may return the path of the 64-bit version of the type library instead of the 32-bit one.

The fourth patch (KB2272691) is for a game, application, or firmware that is either installed incorrectly, causes system instability, or has primary functions that do not work correctly. The update will either prevent incompatible software from running (hard block with third-party manufacturer consent), notify the user that incompatible software is starting to run (soft block), or improve the software's functionality (update). It lists just a single application (Sensible Vision FastAccess) as being affected.

The fifth patch (KB2203330) solves a problem when installing a third-party application for the multiple transport Media Transfer Protocol (MTP) device or for the Windows Portable Device (WPD). Connecting an MTP or WPD device may result in an APC_INDEX_MISMATCH stop error message because of a race condition in the Compositebus.sys driver.

The last patch (KB979453) is for Windows Home Server and addresses five separate issues that were found since the release of WHS Power Pack 3.

Microsoft Windows Azure Future Concept

Microsoft unveiled its roadmap for the Windows Azure cloud computing platform. Moving beyond mere Infrastructure-as-a-Service (IaaS), the company is positioning Windows Azure as a Platform-as-a-Service offering: a comprehensive set of development tools, services, and management systems to allow developers to concentrate on creating available, scalable applications.

Over the next 12-18 months, a raft of new functionality will be rolled out to Windows Azure customers. These features will both make it easier to move existing applications into the cloud, and enhance the services available to cloud-hosted applications.

The company believes that putting applications into the cloud will often be a multistage process. Initially, the applications will run unmodified, which will remove patching and maintenance burdens, but not take advantage of any cloud-specific functionality. Over time, the applications will be updated and modified to start to take advantage of some of the additional capabilities that the Windows Azure platform has to offer.

Microsoft is building Windows Azure into an extremely complete cloud platform. Windows Azure currently takes quite a high-level approach to cloud services: applications have limited access to the underlying operating system, and software that requires Administrator installation isn't usable. Later in the year, Microsoft will enable Administrator-level access and Remote Desktop to Windows Azure instances.

For even more compatibility with existing applications, a new Virtual Machine role is being introduced. This will allow Windows Azure users to upload VHD virtual disks and run these virtual machines in the cloud. In a similar vein, Server Application Virtualization will allow server applications to be deployed to the cloud, without the need either to rewrite them or package them within a VHD. These features will be available in beta by the end of the year. Next year, virtual machine construction will be extended to allow the creation of virtual machines within the cloud. Initially, virtual machine roles will support Windows Server 2008 R2; in 2011, Windows Server 2003 and Windows Server 2008 with Service Pack 2 will also be supported.

Microsoft also has a lot to offer for applications that are cloud-aware. Over the past year, SQL Azure, the cloud-based SQL Server version, has moved closer to feature parity with its conventional version: this will continue with the introduction of SQL Azure Reporting, bringing SQL Server's reporting features to the cloud. New data syncing capabilities will also be introduced, allowing SQL Azure to replicate data with on-premises and mobile applications. Both of these will be available in previews by the end of the year, with final releases in 2011.

A range of new building-block technologies are also being introduced, including a caching component (similar to systems such as memcached) and a message bus (for reliable delivery of messages to and from other applications or mobile devices). A smaller, cheaper tier of Windows Azure instances is also being introduced, comparable to Amazon's recently-released Micro instances of EC2.

The breadth of services that Microsoft is building for the Windows Azure platform is substantial. Compared to Amazon's EC2 or Google's AppEngine, Windows Azure is becoming a far more complete platform: while EC2 and AppEngine both offer a few bits and pieces that are comparable (EC2 is particularly strong at hosting existing applications in custom virtual machines, for example), they aren't offering the same cohesive set of services.

Nonetheless, there are still areas that could be improved. The billing system is currently inflexible, and offers no ability for third parties to integrate with the existing Windows Azure billing. This means that a company wishing to offer its own building blocks for use by Windows Azure applications has to also implement its own monitoring and billing system. Windows Azure also has no built-in facility for automating job management and scaling.

Both of these gaps were pertinent to one of yesterday's demonstrations. Animation studio Pixar has developed a prototype version of its RenderMan rendering engine that works on Windows Azure. Traditionally, RenderMan was only accessible to the very largest animation studios, as it requires considerable investment in hardware to build render farms. By moving RenderMan to the cloud, smaller studios can use RenderMan for rendering jobs without having to maintain all those systems. It allows RenderMan to be sold as a service to anyone needing rendering capabilities.

Neither job management—choosing when to spin up extra instances, when to power them down, how to spread the different frames that need rendering between instances—nor billing are handled by Windows Azure itself. In both cases, Pixar needed to develop its own facilities. Microsoft recognizes that these are likely to be useful to a broad range of applications, and as such good candidates for a Microsoft-provided building block. But at the moment, they're not a part of the platform.

Microsoft CEO Steve Ballmer has said that Microsoft is "all in" with the cloud. The company is certainly working hard to make Windows Azure a better platform, and the commitment to the cloud extends beyond the Windows Azure team itself. Ars was told that all new development of online applications within Microsoft was using Windows Azure, and with few exceptions, existing online applications had migration plans that would be implemented in the next two years. The two notable exceptions are Hotmail and Bing, both of which already have their own, custom-built, dedicated server farms.

This internal commitment is no surprise given the history of the platform. Windows Azure was originally devised and developed to be an internal platform for application hosting. However, before there was any significant amount of internal usage, the company decided to offer it as a service to third parties. Now that the platform has matured, those internal applications are starting to migrate over. As such, this makes Windows Azure, in a sense, the opposite to both EC2 and AppEngine. Those products were a way for Amazon and Google to monetize their preexisting infrastructure investment—investment that had to be made simply to run the companies' day-to-day business.

With the newly announced features, there's no doubt that Windows Azure is shaping up to be a cloud computing platform that is both powerful and flexible. Microsoft is taking the market seriously, and its "all in" position seems to represent a genuine commitment to the cloud. What remains to be seen is whether this dedication will be matched by traditionally conservative businesses and developers, especially among small and medium enterprises. A move to the cloud represents a big change in thinking, and the new Windows Azure features will do nothing to assuage widespread fears such as a perceived loss of control. It is this change in mindset, not any technological issue, that represents the biggest barrier to widespread adoption of Windows Azure, and how Microsoft aims to tackle the problem is not yet clear.

Friday, October 29, 2010

Basic DNS Domain Server Setup

The Domain Name System is the software that lets you have name-to-number mappings on your computers. The name decel.ecel.uwa.edu.au maps to the number 130.95.4.2 and vice versa. This is achieved through the DNS. The DNS is a hierarchy. There are a small number of root domain name servers that are responsible for tracking the top-level domains and who is under them. Between them, the root domain servers know about all the people who have name servers that are authoritative for domains under the root.

Being authoritative means that if a server is asked about something in that domain, it can say with no ambiguity whether or not a given piece of information is true. For example, suppose we have domains x.z and y.z. There are by definition authoritative name servers for both of these domains; we shall assume that the name servers are machines called nic.x.z and nic.y.z, but that really makes no difference.

If someone asks nic.x.z whether there is a machine called a.x.z, then nic.x.z can authoritatively say yes or no, because it is the authoritative name server for that domain. If someone asks nic.x.z whether there is a machine called a.y.z, then nic.x.z asks nic.y.z whether such a machine exists (and caches the answer for future requests). It asks nic.y.z because nic.y.z is the authoritative name server for the domain y.z. The information about authoritative name servers is stored in the DNS itself, and as long as you have a pointer to a name server more knowledgeable than yourself, you are set.

When a change is made, it propagates slowly out through the internet to eventually reach all machines. The following was supplied by Mark Andrews <Mark.Andrews@syd.dms.csiro.au>.

If both the primary and all secondaries are up and talking when a zone update occurs, then for the refresh period after the update the old data will live for at most (refresh + minimum), and on average (refresh/2 + minimum), for the zone. New information will be available from all servers after the refresh period.

So with a refresh of 3 hours and a minimum of a day, you can expect everything to be working a day after it is changed. If you have a longer minimum, it may take a couple of days before things return to normal.
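The arithmetic above can be sketched in a few lines of Python. This is only an illustration of the rule of thumb quoted from Mark Andrews, using the example values of a 3-hour refresh and a 1-day minimum; the function name `propagation_window` is just a label for the calculation, not part of any DNS software.

```python
# Estimate how long stale zone data can persist after a change,
# per the rule of thumb above (all times in seconds).

def propagation_window(refresh, minimum):
    """Return (worst_case, average) lifetime of old data in seconds."""
    worst = refresh + minimum          # at most: refresh + minimum
    average = refresh // 2 + minimum   # on average: refresh/2 + minimum
    return worst, average

refresh = 3 * 3600    # 3 hours, as in the example
minimum = 24 * 3600   # 1 day
worst, avg = propagation_window(refresh, minimum)
print(worst / 3600, avg / 3600)   # in hours: 27.0 25.5
```

So with these values, old data can linger for up to 27 hours, 25.5 hours on average, matching the "about a day" estimate above.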

There is also a difference between a zone and a domain. The domain is the entire set of machines that are contained within an organisational domain name. For example, the domain uwa.edu.au contains all the machines at the University of Western Australia. A zone is the area of the DNS for which a server is responsible. The University of Western Australia is a large organisation, and trying to track all changes to machines at a central location would be difficult. The authoritative name server for the zone uwa.edu.au delegates the authority for the zone ecel.uwa.edu.au to decel.ecel.uwa.edu.au. Machine foo.ecel.uwa.edu.au is in the zone that decel is authoritative for. Machine bar.uwa.edu.au is in the zone that uniwa.uwa.edu.au is authoritative for.

2 Installing the DNS:

First I'll assume you already have a copy of the Domain Name Server software. It is probably called named or in.named depending on your flavour of unix. I never had to get a copy, but if anyone thinks that information should be here then by all means tell me and I'll put it in. If you intend to use the package called BIND, you should be sure to get version 4.9.x, which is the most recent version at this point in time.

For more information on the latest version of BIND you should take a look at Internet Software Consortium which sponsors the development of BIND. - Kavli

2.1 The Boot File:

First step is to create the file named.boot. This describes to named (we'll dispense with in.named; take the two to be the same) where the information it requires can be found. This file is normally found in /etc/named.boot, and I personally tend to leave it there because then I know where to find it. If you don't want to leave it there but would rather place it in a directory with the rest of your named files, there is usually an option on named to specify the location of the boot file.

An alternative is of course to make a symbolic link from /etc/named.boot to the wanted directory. - Kavli

Your typical boot file will look like this if you are an unimportant leaf node and there are other name servers at your site.

directory /etc/namedfiles

cache . root.cache
primary ecel.uwa.edu.au ecel.uwa.domain
primary 0.0.127.in-addr.arpa 0.0.127.domain
primary 4.95.130.in-addr.arpa 4.95.130.domain
forwarders 130.95.128.1

Here is an alternative layout used by Christophe Wolfhugel <Christophe.Wolfhugel@grasp.insa-lyon.fr> He finds this easier because of the large number of domains he has. The structure is essentially the same, but the file names use the domain name rather than the IP subnet to describe the contents.

directory /usr/local/etc/bind
cache . p/root
forwarders 134.214.100.1 192.93.2.4
;
; Primary servers
;
primary fr.net p/fr.net
primary frmug.fr.net p/frmug.fr.net
primary 127.in-addr.arpa p/127
;
; Secondary servers
;
secondary ensta.fr 147.250.1.1 s/ensta.fr
secondary gatelink.fr.net 134.214.100.1 s/gatelink.fr.net
secondary insa-lyon.fr 134.214.100.1 s/insa-lyon.fr
secondary loesje.org 145.18.226.21 s/loesje.org
secondary nl.loesje.org 145.18.226.21 s/nl.loesje.org
secondary pcl.ac.uk 161.74.160.5 s/pcl.ac.uk
secondary univ-lyon1.fr 134.214.100.1 s/univ-lyon1.fr
secondary wmin.ac.uk 161.74.160.5 s/wmin.ac.uk
secondary westminster.ac.uk 161.74.160.5 s/westminster.ac.uk
;
;
; Secondary for addresses
;
secondary 74.161.in-addr.arpa 161.74.160.5 s/161.74
secondary 214.134.in-addr.arpa 134.214.100.1 s/134.214
secondary 250.147.in-addr.arpa 147.250.1.1 s/147.250
;
; Classes C
;
secondary 56.44.192.in-addr.arpa 147.250.1.1 s/192.44.56
secondary 57.44.192.in-addr.arpa 147.250.1.1 s/192.44.57

The lines in the named.boot file have the following meanings.

directory

This is the path that named will place in front of all file names referenced from here on. If no directory is specified, it looks for files relative to /etc.

cache

This is the information that named uses to get started. Named must know the IP numbers of at least a few other name servers to get started. Information in the cache is treated differently depending on your version of named. Some versions of named use the information included in the cache permanently, and others retain but ignore the cache information once up and running.

Be sure you get an up-to-date cache-file. An obsolete cache file is a good source of problems. - Kavli

primary

This is one of the domains for which this machine is authoritative. You put the entire domain name in. You need both forward and reverse lookups. The first value is the domain to append to every name included in that file. (There are some exceptions, but they will be explained later.) The name at the end of the line is the name of the file (relative to /etc, or to the directory if you specified one). The filename can have slashes in it to refer to subdirectories, so if you have a lot of domains you may want to split them up.

BE VERY CAREFUL TO PUT THE NUMBERS BACK TO FRONT FOR THE REVERSE LOOK UP FILE. The example given above is for the subnet ecel.uwa.edu.au whose IP address is 130.95.4.*. The reverse name must be 4.95.130.in-addr.arpa. It must be backwards and it must end with .in-addr.arpa. If your reverse name lookups don't work, check this. If they still don't work, check this again.

forwarders

This is a list of IP numbers to which requests are forwarded for sites about which we are unsure. A good choice here is the name server which is authoritative for the zone above you.

secondary (This line is not in the example, but is worth mentioning.)

A secondary line indicates that you wish to be a secondary name server for this domain. You do not need to do this usually. All it does is help make the DNS more robust. You should have at least one secondary server for your site, but you do not need to be a secondary server for anyone else. You can by all means, but you don't need to be. If you want to be a secondary server for another domain, then place the line

secondary gu.uwa.edu.au 130.95.100.3 130.95.128.1 sec/gu.uwa.edu.au

in your named.boot. This will make your named try the servers on both of the machines specified to see if it can obtain the information about those domains. You can specify a number of IP addresses for the machines to query; how many probably depends on your machine. On startup, your copy of named will go and query all the information it can get about the domain in question, remember it, and act as though it were authoritative for that domain.

Next you will want to start creating the data files that contain the name definitions.

2.2 The cache file:

You should always use the latest cache file. The simplest way to do this is by using dig(1) this way:

dig @ns.internic.net . ns > root.cache

You can also get a copy of the cache file by ftp'ing FTP.RS.INTERNIC.NET.

An example of a cache file is located in Appendix A.

2.3 The Forward Mapping file:

The file ecel.uwa.edu.au. will be used for the example with a couple of machines left in for the purpose of the exercise. Here is a copy of what the file looks like with explanations following.

; Authoritative data for ecel.uwa.edu.au
;
@ IN SOA decel.ecel.uwa.edu.au. postmaster.ecel.uwa.edu.au. (
93071200 ; Serial (yymmddxx)
10800 ; Refresh 3 hours
3600 ; Retry 1 hour
3600000 ; Expire 1000 hours
86400 ) ; Minimum 24 hours
IN A 130.95.4.2
IN MX 100 decel
IN MX 150 uniwa.uwa.edu.au.
IN MX 200 relay1.uu.net.
IN MX 200 relay2.uu.net.

localhost IN A 127.0.0.1

decel IN A 130.95.4.2
IN HINFO SUN4/110 UNIX
IN MX 100 decel
IN MX 150 uniwa.uwa.edu.au.
IN MX 200 relay1.uu.net
IN MX 200 relay2.uu.net

gopher IN CNAME decel.ecel.uwa.edu.au.

accfin IN A 130.95.4.3
IN HINFO SUN4/110 UNIX
IN MX 100 decel
IN MX 150 uniwa.uwa.edu.au.
IN MX 200 relay1.uu.net
IN MX 200 relay2.uu.net

chris-mac IN A 130.95.4.5
IN HINFO MAC-II MACOS

The comment character is ';' so the first two lines are just comments indicating the contents of the file.

All values from here on have IN in them. This indicates that the value is an InterNet record. There are a couple of other types, but all you need concern yourself with is internet ones.

The IN type is the default and can safely be omitted. It looks better without them, I think.
- Kavli

The SOA record is the Start Of Authority record. It contains the information that other nameservers will learn about this domain and how to treat the information they are given about it. The '@' as the first character in the line indicates that you wish to define things about the domain for which this file is responsible. The domain name is found in the named.boot file in the corresponding line to this filename. All information listed refers to the most recent machine/domain name so all records from the '@' until 'localhost' refer to the '@'. The SOA record has 5 magic numbers. First magic number is the serial number. If you change the file, change the serial number. If you don't, no other name servers will update their information. The old information will sit around for a very long time.
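The yymmddxx serial convention used in the example zone above (e.g. 93071200 for 12 July 1993, revision 00) can be generated mechanically. The following is only a sketch of that convention; `make_serial` is a hypothetical helper name, not part of named or BIND.

```python
# Build a zone serial number in the yymmddxx form used in the example
# zone files: two-digit year, month, day, plus a two-digit revision
# counter so the serial can be bumped several times in one day.
import datetime

def make_serial(date, revision=0):
    """Return an integer serial like 93071200 for 12 July 1993, rev 0."""
    return int(date.strftime("%y%m%d")) * 100 + revision

print(make_serial(datetime.date(1993, 7, 12)))     # 93071200
print(make_serial(datetime.date(1993, 7, 12), 1))  # 93071201
```

Remember the point made above: any edit to the zone file that isn't accompanied by a serial bump will simply not be noticed by other name servers.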

Refresh is the time between refreshing information about the SOA. Retry is the frequency of retrying if an authoritative server cannot be contacted. Expire is how long a secondary name server will keep information about a zone without successfully updating it or confirming that the data is up to date. This is to help the information withstand fairly lengthy downtimes of machines or connections in the network without having to recollect all the information. Minimum is the default time-to-live value handed out by a nameserver for all records in a zone without an explicit TTL value; this is how long the data will live after being handed out. The two pieces of information before the five magic numbers are, first, the machine that is considered the origin of all of this information (generally the machine running your named is a good choice), and second, an email address for someone who can fix any problems that may occur with the DNS. Good ones here are postmaster, hostmaster or root. NOTE: You use dots and not '@' in the email address.

eg: root.decel.ecel.uwa.edu.au is correct
and
root@decel.ecel.uwa.edu.au is incorrect.

If your name contains a dot, e.g. Ronny.Kavli@mailhost.somedomain.there, you must escape the dot: Ronny\.Kavli.mailhost.somedomain.there. But, if possible, you should create a mail alias instead; that way, related mail can go to more than one person. - Kavli
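The email-to-SOA-contact conversion described above is mechanical enough to sketch in code. This is only an illustration of the two rules just given (escape dots in the local part, then replace the '@' with a dot); `email_to_rname` is a hypothetical helper name, not a real library function.

```python
# Convert a contact e-mail address into the dotted form used in the
# second field of an SOA record, escaping dots in the local part.
def email_to_rname(email):
    local, domain = email.split("@", 1)
    return local.replace(".", "\\.") + "." + domain + "."

print(email_to_rname("root@decel.ecel.uwa.edu.au"))
# root.decel.ecel.uwa.edu.au.
print(email_to_rname("Ronny.Kavli@mailhost.somedomain.there"))
# Ronny\.Kavli.mailhost.somedomain.there.
```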

We now have an address to map ecel.uwa.edu.au to. The address is 130.95.4.2, which happens to be decel, our main machine. If you try to find an IP number for the domain ecel.uwa.edu.au, it will get you decel.ecel.uwa.edu.au's IP number. This is a nicety which means that people with non-MX-record mailers can still mail fred@ecel.uwa.edu.au and don't have to find the name of a machine under the domain to mail to.

Now we have a couple of MX records for the domain itself. The MX records specify where to send mail destined for the machine/domain that the MX record is for. In this case we would prefer that all mail for fred@ecel.uwa.edu.au is sent to decel.ecel.uwa.edu.au. If that does not work, we would like it to go to uniwa.uwa.edu.au, because there are a number of machines that might have no idea how to get to us but may be able to get to uniwa. And failing that, try the site relay1.uu.net. A small number indicates that this site should be tried first; the larger the number, the further down the list of sites to try it is. NOTE: Not all machines have mailers that pay attention to MX records. Some only pay attention to IP numbers, which is really stupid. All machines are required to have MX-capable Mail Transfer Agents (MTAs), as there are many addresses that can only be reached via this means.
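The preference ordering described above can be sketched as follows. The record list mirrors the example zone earlier in this section; `mx_records` and `delivery_order` are just illustrative names, and a real mailer would of course also handle connection failures and fall through to the next host.

```python
# MX records for ecel.uwa.edu.au from the example zone file:
# (preference, mail exchanger). Lower preference is tried first.
mx_records = [
    (150, "uniwa.uwa.edu.au."),
    (100, "decel.ecel.uwa.edu.au."),
    (200, "relay1.uu.net."),
    (200, "relay2.uu.net."),
]

# A mailer tries hosts in ascending preference order; hosts with
# equal preference (the two uu.net relays) may be tried in any order.
delivery_order = [host for pref, host in sorted(mx_records)]
print(delivery_order[0])   # decel.ecel.uwa.edu.au.
```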

Do not point an MX record to a CNAME record. A lot of mailers don't handle this. Add another A-record to it instead, but let the reverse table point to the real name. In other words: Don't add a PTR record to it. - Kavli

There is an entry for localhost now. Note that this is somewhat of a kludge and should probably be handled far more elegantly. By placing localhost here, a machine comes into existence called localhost.ecel.uwa.edu.au. If you finger it or telnet to it, you get your own machine, because the name lookup returns 127.0.0.1, which is the special case for your own machine. I have used a couple of different DNS packages. The old BSD one let you put things into the cache which would always work but would not be exported to other nameservers. In the newer Sun one, they are left in the cache and are mostly ignored once named is up and running. This isn't a bad solution, it's just not a good one.

Decel is the main machine in our domain. It has the IP number 130.95.4.2 and that is what this next line shows. It also has an HINFO entry. HINFO is Host Info, which is meant to be some sort of an indication of what the machine is and what it runs. The values are two whitespace-separated fields, the first being the hardware and the second being the software. HINFO is not compulsory, it's just nice to have sometimes. We also have some MX records so that mail destined for decel has some other avenues before it bounces back to the sender as undeliverable.

It is a good idea to give all machines capable of handling mail an MX record because this can be cached on remote machines and will help to reduce the load on the network.

gopher.ecel.uwa.edu.au is the gopher server in our division. Now because we are cheapskates and don't want to go and splurge on a separate machine just for handling gopher requests, we have made it a CNAME to our main machine. While it may seem pointless, it does have one main advantage. When we discover that our placing terabytes of popular quicktime movies on our gopher server (no we haven't and we don't intend to) causes an unbearable load on our main machine, we can quickly move the CNAME to point at a new machine by changing the name mentioned in the CNAME. Then the slime of the world can continue to get their essential movies with a minimal interruption to the network. Other good CNAMEs to maintain are things like ftp, mailhost, netfind, archie, whois, and even dns (though the most obvious use for this fails). It also makes it easier for people to find these services in your domain.

Regarding CNAME from dns: NS records must point to A records. Same for MX records. - Kavli

We should probably start using WKS records for things like gopher and whois rather than making DNS names for them. The tools are not in wide circulation for this to work though. (Plus all those comments in many DNS implementation of "Not implemented" next to the WKS record)

WKS == Well Known Services. - The different services a host is providing
- Kavli

Finally we have a macintosh which belongs to my boss. All it needs is an IP number, and we have included the HINFO so that you can see that it is in fact a MacII running a Mac System. To get the list of preferred values, you should get a copy of RFC 1340. It lists lots of useful information such as /etc/services values, ethernet manufacturer hardware addresses, HINFO defaults and many others. I will include the list as it stands at the moment, but if any RFC supersedes 1340, then it will have a more complete list. See Appendix B for that list.

NOTE: If Chris had a very high profile and wanted his mac to appear like a fully connected unix machine as far as internet services were concerned, he could simply place an MX record such as

IN MX 100 decel

after his machine and any mail sent to chris@chris-mac.ecel.uwa.edu.au would be automatically rerouted to decel.

2.4 The Reverse Mapping File

The reverse name lookup is handled in a most bizarre fashion. Well it all makes sense, but it is not immediately obvious.

All of the reverse name lookups are done by finding the PTR record associated with the name w.x.y.z.in-addr.arpa. So to find the name associated with the IP number 1.2.3.4, we look for information stored in the DNS under the name 4.3.2.1.in-addr.arpa. They are organised this way so that when you are allocated a B class subnet for example, you get all of the IP numbers in the domain 130.95. Now to turn that into a reverse name lookup domain, you have to invert the numbers or your registered domains will be spread all over the place. It is a mess and you need not understand the finer points of it all. All you need to know is that you put the reverse name lookup files back to front.
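The back-to-front construction described above is simple to sketch. This is just an illustration of how the in-addr.arpa name is formed from an IP number; `reverse_name` is a hypothetical helper name, not part of any resolver library.

```python
# Build the in-addr.arpa name used for a reverse (PTR) lookup by
# writing the four octets of the address back to front.
def reverse_name(ip):
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_name("130.95.4.2"))   # 2.4.95.130.in-addr.arpa
print(reverse_name("1.2.3.4"))      # 4.3.2.1.in-addr.arpa
```

Note how 130.95.4.2 becomes 2.4.95.130.in-addr.arpa: the 4.95.130.in-addr.arpa zone file shown below only has to supply the final octet for each machine.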

Here is the sample reverse name lookup files to go with our example.

0.0.127.in-addr.arpa
--
; Reverse mapping of domain names 0.0.127.in-addr.arpa
; Nobody pays attention to this, it is only so 127.0.0.1 -> localhost.
@ IN SOA decel.ecel.uwa.edu.au. postmaster.ecel.uwa.edu.au. (
91061801 ; Serial (yymmddxx)
10800 ; Refresh 3 hours
3600 ; Retry 1 hour
3600000 ; Expire 1000 hours
86400 ) ; Minimum 24 hours
;
1 IN PTR localhost.ecel.uwa.edu.au.
--

4.95.130.in-addr.arpa
--
; reverse mapping of domain names 4.95.130.in-addr.arpa
;
@ IN SOA decel.ecel.uwa.edu.au. postmaster.ecel.uwa.edu.au. (
92050300 ; Serial (yymmddxx format)
10800 ; Refresh 3hHours
3600 ; Retry 1 hour
3600000 ; Expire 1000 hours
86400 ) ; Minimum 24 hours
2 IN PTR decel.ecel.uwa.edu.au.
3 IN PTR accfin.ecel.uwa.edu.au.
5 IN PTR chris-mac.ecel.uwa.edu.au.
--

It is important to remember that you must have a second start of authority record for the reverse name lookups. Each reverse name lookup file must have its own SOA record. The reverse name lookup on the 127 domain is debatable seeing as there is likely to be only one number in the file and it is blatantly obvious what it is going to map to.

In general: Each primary file pointed to in named.boot should have one - and only one - SOA record.
- Kavli

The SOA details are the same as in the forward mapping.

Each of the numbers listed down the left-hand side indicates that the line contains information for that number of the subnet. The more significant digits of the subnet are implicit: e.g. the 130.95.4 of the IP number 130.95.4.2 is implied for every number mentioned in the file.

The PTR must point to a machine that can be found in the DNS. If the name is not in the DNS, some versions of named just bomb out at this point.

Reverse name lookups are not compulsory, but nice to have. It means that when people log into machines, they get names indicating where they are logged in from. It makes it easier for you to spot things that are wrong and it is far less cryptic than having lots of numbers everywhere. Also if you do not have a name for your machine, some brain dead protocols such as talk will not allow you to connect.

Since I first wrote this, I have had one suggestion of an alternative way to do the localhost entry. I think it is a matter of personal opinion, so I'll include it here in case anyone thinks this is a more appropriate method.

The following is courtesy of jep@convex.nl (JEP de Bie)

The way I did it was:

1) add in /etc/named.boot:

primary .                   localhost
primary 127.in-addr.ARPA.   IP127

(Craig: It has been suggested by Mark Andrews that this is a bad practice, particularly if you have upgraded to BIND 4.9. You also run the risk of polluting the root name servers. This comes down to a battle of ideology versus practicality. Think twice before declaring yourself authoritative for the root domain.)

So I not only declare myself (falsely? - probably, but nobody is going to listen anyway most likely [CPR] :-) authoritative for the 127.in-addr.ARPA domain but also for the . (root) domain.

2) the file localhost has:

$ORIGIN .
localhost IN A 127.0.0.1

3) and the file IP127:

$ORIGIN 127.in-addr.ARPA.
1.0.0 IN PTR localhost.

4) and I have in my own domain file (convex.nl) the line:

$ORIGIN convex.nl.
localhost IN CNAME localhost.

The advantage (elegance?) is that a query (A) for localhost. gives the reverse of the query for 1.0.0.127.in-addr.ARPA. And it also shows that localhost.convex.nl is only a nickname for something more absolute. (While the notion of localhost is of course relative :-)).

And I also think there is a subtle difference between the lines

primary 127.in-addr.ARPA. IP127
and
primary 0.0.127.in-addr.ARPA. 4.95.130.domain
=============
JEP de Bie
jep@convex.nl
=============


3 Delegating authority for domains within your domain

When you have a very big domain that can be broken into logical and separate entities, each able to look after its own DNS information, you will probably want to delegate. Maintain a central area for the things that everyone needs to see, and delegate the authority for the other parts of the organisation so that they can manage themselves.

Another essential piece of information is that every domain that exists must have NS records associated with it. These NS records denote the name servers that are queried for information about that zone. For your zone to be recognised by the outside world, the server responsible for the zone above you must have created an NS record for your machine in your domain. For example, putting the computer club onto the network and giving them control over their own part of the domain space, we have the following:

The machine authoritative for gu.uwa.edu.au is mackerel and the machine authoritative for ucc.gu.uwa.edu.au is marlin.

in mackerel's data for gu.uwa.edu.au we have the following

@ IN SOA ...
IN A 130.95.100.3
IN MX mackerel.gu.uwa.edu.au.
IN MX uniwa.uwa.edu.au.

marlin IN A 130.95.100.4

ucc IN NS marlin.gu.uwa.edu.au.
IN NS mackerel.gu.uwa.edu.au.

Marlin is also given an IP address in our domain as a convenience. If they blow up their name serving, there is less that can go wrong, because people can still see that machine, which is a start. You could instead place "marlin.ucc" in the first column and leave the machine totally inside the ucc domain.

The second NS line is there because mackerel will be acting as secondary name server for the ucc.gu domain. Do not include this line if you are not authoritative for the information included in the sub-domain.

4 Troubleshooting your named:

4.1 Named doesn't work! What is wrong?

Step 1: Run nslookup and see what nameserver it tries to connect you to. If nslookup connects you to the wrong nameserver, create an /etc/resolv.conf file that points your machine at the correct nameserver. If there is no resolv.conf file, the resolver uses the nameserver on the local machine.

Step 2: Make sure that named is actually running.

Step 3: Restart named and see if you get any error messages on the console; also check /usr/adm/messages.

Step 4: If named is running, nslookup connects to the appropriate nameserver, and nslookup can answer simple questions, but other programs such as ping do not work with names, then you most likely need to install resolv+.

4.2 Local has noticed change, but nobody else has new info

I changed my named database and my local machine has noticed, but nobody else has the new information?

Change the serial number in the SOA for any domains that you modified and restart named. Wait an hour and check again. The information propagates out; it won't change immediately.
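The yymmddxx convention used in the sample files can be bumped automatically. This little helper is my own sketch (not part of BIND); it always returns a serial strictly greater than the old one:

```python
import datetime

def next_serial(old_serial, today=None):
    """Next yymmddxx-style serial: today's date with a two-digit
    revision counter, always strictly greater than the old serial."""
    today = today or datetime.date.today()
    base = int(today.strftime("%y%m%d")) * 100   # yymmdd00
    return max(base, old_serial + 1)

# e.g. a zone last touched 18 June 1991, edited on 3 May 1992:
print(next_serial(91061801, datetime.date(1992, 5, 3)))   # 92050300
```

A second edit on the same day would yield 92050301, and so on up to 99 revisions per day.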

4.3 I can see their info, but they can't see mine

My local machine knows about all the name server information, but no other sites know about me?

Find an upstream nameserver (one that has an SOA for a domain above yours) and ask them to be a secondary name server for you. e.g. if you are ecel.uwa.edu.au, ask someone who has an SOA for the domain uwa.edu.au. Get NS records (and glue) added to your parent zone for your zone. This is called delegating. It should be done formally like this or you will get inconsistent answers out of the DNS. ALL NAMESERVERS FOR YOUR ZONE SHOULD BE LISTED IN THIS MANNER.

4.4 Forward domain works, but not backwards

My forward domain names work, but the backward names do not?

Make sure the numbers are back to front and have the in-addr.arpa on the end.

Make sure your reverse zone is registered. For class C nets this can be done by mailing hostmaster@internic.net. For class A and B nets, make sure that you are registered with the primary for your net and that the net itself is registered with hostmaster@internic.net.

5 How to get useful information from nslookup:

Nslookup is a very useful program, but I'm sure there are fewer than 20 people worldwide who know how to use it to its full potential, and I'm most certainly not one of them. If you don't like using nslookup, there is at least one other program, called dig, that has most (all?) of the functionality of nslookup and is a hell of a lot easier to use.

I won't go into dig much here except to say that it is a lot easier to get this information out of. I won't bother because nslookup ships with almost all machines that come with network software.

To run nslookup, you usually just type nslookup. It will tell you the server it connects to. You can specify a different server if you want. This is useful when you want to tell if your named information is consistent with other servers.

5.1 Getting name to number mappings

Type the name of the machine. Typing 'decel' is enough if the machine is local.

(Once you have run nslookup successfully)

> decel
Server: ecel.uwa.edu.au
Address: 130.95.4.2

Name: decel.ecel.uwa.edu.au
Address: 130.95.4.2

>

One curious quirk of some name resolvers is that if you type a machine name, they will try a number of permutations. For example if my machine is in the domain ecel.uwa.edu.au and I try to find a machine called fred, the resolver will try the following.

fred.ecel.uwa.edu.au.
fred.uwa.edu.au.
fred.edu.au.
fred.au.
fred.

This can be useful, but more often than not, you would simply prefer a good way to make aliases for machines that are commonly referenced. If you are running resolv+, you should just be able to put common machines into the host file.
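The permutation behaviour above is easy to model. The following sketch mimics how such a resolver expands an unqualified name (the exact behaviour varies between resolver versions, so treat this as illustrative):

```python
def search_list(name, local_domain):
    """Mimic the old resolver behaviour: try the name under each
    ancestor of the local domain, then the bare name."""
    if name.endswith("."):
        return [name]          # already fully qualified, no permutations
    parts = local_domain.split(".")
    tries = [f"{name}.{'.'.join(parts[i:])}." for i in range(len(parts))]
    tries.append(name + ".")
    return tries

for attempt in search_list("fred", "ecel.uwa.edu.au"):
    print(attempt)
```

For "fred" in ecel.uwa.edu.au this reproduces exactly the five-name list shown above.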

DIG: dig <machine name>

5.2 Getting number to name mappings

Nslookup defaults to finding you the Address of the name specified. For reverse lookups you already have the address and you want to find the name that goes with it. If you read and understood the bit above where it describes how to create the number to name mapping file, you would guess that you need to find the PTR record instead of the A record. So you do the following.

> set type=ptr
> 2.4.95.130.in-addr.arpa
Server: decel.ecel.uwa.edu.au
Address: 130.95.4.2

2.4.95.130.in-addr.arpa host name = decel.ecel.uwa.edu.au
>

nslookup tells you that the ptr for the machine name 2.4.95.130.in-addr.arpa points to the host decel.ecel.uwa.edu.au.

DIG: dig -x <machine number>

5.3 Finding where mail goes when a machine has no IP number

When a machine is not IP connected, it needs to specify to the world, where to send the mail so that it can dial up and collect it every now and then. This is accomplished by setting up an MX record for the site and not giving it an IP number. To get the information out of nslookup as to where the mail goes, do the following.

> set type=mx
> dialix.oz.au
Server: decel.ecel.uwa.oz.au
Address: 130.95.4.2

Non-authoritative answer:
dialix.oz.au preference = 100, mail exchanger = uniwa.uwa.OZ.AU
dialix.oz.au preference = 200, mail exchanger = munnari.OZ.AU
Authoritative answers can be found from:
uniwa.uwa.OZ.AU inet address = 130.95.128.1
munnari.OZ.AU inet address = 128.250.1.21
munnari.OZ.AU inet address = 192.43.207.1
mulga.cs.mu.OZ.AU inet address = 128.250.35.21
mulga.cs.mu.OZ.AU inet address = 192.43.207.2
dmssyd.syd.dms.CSIRO.AU inet address = 130.155.16.1
ns.UU.NET inet address = 137.39.1.3

You tell nslookup that you want to search for MX records and then you give it the name of the machine. It tells you the preference for the mail (smaller means more preferable) and who the mail should be sent to. It also includes sites that are authoritative (have this name in their named database files) for this MX record. There are multiple sites as a backup. As can be seen, our local public internet access company dialix would like all of their mail to be sent to uniwa, where they collect it from. If uniwa is not up, send it to munnari and munnari will get it to uniwa eventually.
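Mail routing from that output is simple: sort by preference and try each exchanger in turn. A toy sketch (the function is mine, not part of any mailer):

```python
def delivery_order(mx_records):
    """Given (preference, exchanger) pairs, return the exchangers in
    the order a mailer should try them: lowest preference first."""
    return [host for pref, host in sorted(mx_records)]

print(delivery_order([(200, "munnari.OZ.AU"), (100, "uniwa.uwa.OZ.AU")]))
# ['uniwa.uwa.OZ.AU', 'munnari.OZ.AU']
```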

NOTE: For historical reasons Australia used to be .oz, which was changed to .oz.au to move to the ISO standard extensions upon the advent of IP. We are now moving to a more normal hierarchy, which is where the .edu.au comes from. Pity, I liked having oz.

DIG: dig <zone> mx

5.4 Getting a list of machines in a domain from nslookup

Find a server that is authoritative for the domain, or just generally all-knowing. To find a good server, find all the SOA records for a given domain. To do this, set type=soa and enter the domain, just as in the two previous examples.

Once you have a server, type:

> ls gu.uwa.edu.au.
[uniwa.uwa.edu.au]
Host or domain name Internet address
gu server = mackerel.gu.uwa.edu.au
gu server = uniwa.uwa.edu.au
gu 130.95.100.3
snuffle-upagus 130.95.100.131
mullet 130.95.100.2
mackerel 130.95.100.3
marlin 130.95.100.4
gugate 130.95.100.1
gugate 130.95.100.129
helpdesk 130.95.100.180
lan 130.95.100.0
big-bird 130.95.100.130

This gives you a list of all the machines in the domain.

If you wanted to find a list of all of the MX records for the domain, you can put a -m flag in the ls command.

> ls -m gu.uwa.edu.au.
[uniwa.uwa.edu.au]
Host or domain name Metric Host
gu 100 mackerel.gu.uwa.edu.au
gu 200 uniwa.uwa.edu.au

This only works for a limited selection of the different types.

DIG: dig axfr <zone> @<server>

6 Appendices

6.2 Appendix B

An Excerpt from
RFC 1340 Assigned Numbers July 1992


MACHINE NAMES

These are the Official Machine Names as they appear in the Domain Name
System HINFO records and the NIC Host Table. Their use is described in
RFC-952 [53].

A machine name or CPU type may be up to 40 characters taken from the
set of uppercase letters, digits, and the two punctuation characters
hyphen and slash. It must start with a letter, and end with a letter
or digit.
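The naming rule just quoted is easy to check mechanically. A sketch of it as a regular expression (the regex is my reading of the rule, not taken from the RFC):

```python
import re

# Up to 40 chars from uppercase letters, digits, hyphen and slash;
# must start with a letter and end with a letter or digit.
MACHINE_NAME = re.compile(r"^[A-Z][A-Z0-9/-]{0,38}[A-Z0-9]$|^[A-Z]$")

def is_machine_name(name):
    return bool(MACHINE_NAME.match(name))

print(is_machine_name("SUN-3/50"))   # True
print(is_machine_name("3COM"))       # False: starts with a digit
```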

ALTO DEC-1080
ALTOS-6800 DEC-1090
AMDAHL-V7 DEC-1090B
APOLLO DEC-1090T
ATARI-104ST DEC-2020T
ATT-3B1 DEC-2040
ATT-3B2 DEC-2040T
ATT-3B20 DEC-2050T
ATT-7300 DEC-2060
BBN-C/60 DEC-2060T
BURROUGHS-B/29 DEC-2065
BURROUGHS-B/4800 DEC-FALCON
BUTTERFLY DEC-KS10
C/30 DEC-VAX-11730
C/70 DORADO
CADLINC DPS8/70M
CADR ELXSI-6400
CDC-170 EVEREX-386
CDC-170/750 FOONLY-F2
CDC-173 FOONLY-F3
CELERITY-1200 FOONLY-F4
CLUB-386 GOULD
COMPAQ-386/20 GOULD-6050
COMTEN-3690 GOULD-6080
CP8040 GOULD-9050
CRAY-1 GOULD-9080
CRAY-X/MP H-316
CRAY-2 H-60/68
CTIWS-117 H-68
DANDELION H-68/80
DEC-10 H-89
DEC-1050 HONEYWELL-DPS-6
DEC-1077 HONEYWELL-DPS-8/70
HP3000 ONYX-Z8000
HP3000/64 PDP-11
IBM-158 PDP-11/3
IBM-360/67 PDP-11/23
IBM-370/3033 PDP-11/24
IBM-3081 PDP-11/34
IBM-3084QX PDP-11/40
IBM-3101 PDP-11/44
IBM-4331 PDP-11/45
IBM-4341 PDP-11/50
IBM-4361 PDP-11/70
IBM-4381 PDP-11/73
IBM-4956 PE-7/32
IBM-6152 PE-3205
IBM-PC PERQ
IBM-PC/AT PLEXUS-P/60
IBM-PC/RT PLI
IBM-PC/XT PLURIBUS
IBM-SERIES/1 PRIME-2350
IMAGEN PRIME-2450
IMAGEN-8/300 PRIME-2755
IMSAI PRIME-9655
INTEGRATED-SOLUTIONS PRIME-9755
INTEGRATED-SOLUTIONS-68K PRIME-9955II
INTEGRATED-SOLUTIONS-CREATOR PRIME-2250
INTEGRATED-SOLUTIONS-CREATOR-8 PRIME-2655
INTEL-386 PRIME-9955
INTEL-IPSC PRIME-9950
IS-1 PRIME-9650
IS-68010 PRIME-9750
LMI PRIME-2250
LSI-11 PRIME-750
LSI-11/2 PRIME-850
LSI-11/23 PRIME-550II
LSI-11/73 PYRAMID-90
M68000 PYRAMID-90MX
MAC-II PYRAMID-90X
MASSCOMP RIDGE
MC500 RIDGE-32
MC68000 RIDGE-32C
MICROPORT ROLM-1666
MICROVAX S1-MKIIA
MICROVAX-I SMI
MV/8000 SEQUENT-BALANCE-8000
NAS3-5 SIEMENS
NCR-COMTEN-3690 SILICON-GRAPHICS
NEXT/N1000-316 SILICON-GRAPHICS-IRIS
NOW SGI-IRIS-2400
SGI-IRIS-2500 SUN-3/50
SGI-IRIS-3010 SUN-3/60
SGI-IRIS-3020 SUN-3/75
SGI-IRIS-3030 SUN-3/80
SGI-IRIS-3110 SUN-3/110
SGI-IRIS-3115 SUN-3/140
SGI-IRIS-3120 SUN-3/150
SGI-IRIS-3130 SUN-3/160
SGI-IRIS-4D/20 SUN-3/180
SGI-IRIS-4D/20G SUN-3/200
SGI-IRIS-4D/25 SUN-3/260
SGI-IRIS-4D/25G SUN-3/280
SGI-IRIS-4D/25S SUN-3/470
SGI-IRIS-4D/50 SUN-3/480
SGI-IRIS-4D/50G SUN-4/60
SGI-IRIS-4D/50GT SUN-4/110
SGI-IRIS-4D/60 SUN-4/150
SGI-IRIS-4D/60G SUN-4/200
SGI-IRIS-4D/60T SUN-4/260
SGI-IRIS-4D/60GT SUN-4/280
SGI-IRIS-4D/70 SUN-4/330
SGI-IRIS-4D/70G SUN-4/370
SGI-IRIS-4D/70GT SUN-4/390
SGI-IRIS-4D/80GT SUN-50
SGI-IRIS-4D/80S SUN-100
SGI-IRIS-4D/120GTX SUN-120
SGI-IRIS-4D/120S SUN-130
SGI-IRIS-4D/210GTX SUN-150
SGI-IRIS-4D/210S SUN-170
SGI-IRIS-4D/220GTX SUN-386i/250
SGI-IRIS-4D/220S SUN-68000
SGI-IRIS-4D/240GTX SYMBOLICS-3600
SGI-IRIS-4D/240S SYMBOLICS-3670
SGI-IRIS-4D/280GTX SYMMETRIC-375
SGI-IRIS-4D/280S SYMULT
SGI-IRIS-CS/12 TANDEM-TXP
SGI-IRIS-4SERVER-8 TANDY-6000
SPERRY-DCP/10 TEK-6130
SUN TI-EXPLORER
SUN-2 TP-4000
SUN-2/50 TRS-80
SUN-2/100 UNIVAC-1100
SUN-2/120 UNIVAC-1100/60
SUN-2/130 UNIVAC-1100/62
SUN-2/140 UNIVAC-1100/63
SUN-2/150 UNIVAC-1100/64
SUN-2/160 UNIVAC-1100/70
SUN-2/170 UNIVAC-1160
UNKNOWN
VAX-11/725
VAX-11/730
VAX-11/750
VAX-11/780
VAX-11/785
VAX-11/790
VAX-11/8600
VAX-8600
WANG-PC002
WANG-VS100
WANG-VS400
WYSE-386
XEROX-1108
XEROX-8010
ZENITH-148

SYSTEM NAMES

These are the Official System Names as they appear in the Domain Name
System HINFO records and the NIC Host Table. Their use is described
in RFC-952 [53].

A system name may be up to 40 characters taken from the set of upper-
case letters, digits, and the three punctuation characters hyphen,
period, and slash. It must start with a letter, and end with a
letter or digit.

AEGIS LISP SUN OS 3.5
APOLLO LISPM SUN OS 4.0
AIX/370 LOCUS SWIFT
AIX-PS/2 MACOS TAC
BS-2000 MINOS TANDEM
CEDAR MOS TENEX
CGW MPE5 TOPS10
CHORUS MSDOS TOPS20
CHRYSALIS MULTICS TOS
CMOS MUSIC TP3010
CMS MUSIC/SP TRSDOS
COS MVS ULTRIX
CPIX MVS/SP UNIX
CTOS NEXUS UNIX-BSD
CTSS NMS UNIX-V1AT
DCN NONSTOP UNIX-V
DDNOS NOS-2 UNIX-V.1
DOMAIN NTOS UNIX-V.2
DOS OS/DDP UNIX-V.3
EDX OS/2 UNIX-PC
ELF OS4 UNKNOWN
EMBOS OS86 UT2D
EMMOS OSX V
EPOS PCDOS VM
FOONEX PERQ/OS VM/370
FUZZ PLI VM/CMS
GCOS PSDOS/MIT VM/SP
GPOS PRIMOS VMS
HDOS RMX/RDOS VMS/EUNICE
IMAGEN ROS VRTX
INTERCOM RSX11M WAITS
IMPRESS RTE-A WANG
INTERLISP SATOPS WIN32
IOS SCO-XENIX/386 X11R3
IRIX SCS XDE
ISI-68020 SIMP XENIX
ITS SUN


6.3 Appendix C: Installing DNS on a Sun when running NIS

====================
2) How to get DNS to be used when running NIS ?

First setup the appropriate /etc/resolv.conf file.
Something like this should do the "trick".

;
; Data file for a client.
;
domain local domain
nameserver address of primary domain nameserver
nameserver address of secondary domain nameserver

where: "local domain" is the domain part of the hostnames.
For example, if your hostname is "thor.ece.uc.edu"
your "local domain" is "ece.uc.edu".

You will need to put a copy of this resolv.conf on
all NIS(YP) servers including slaves.

Under SunOS 4.1 and greater, change the "B=" at the top
of the /var/yp/Makefile to "B=-b" and setup NIS in the
usual fashion.

You will need to reboot or restart ypserv for these changes to take effect.

Under 4.0.x, edit the Makefile or apply the following "diff":

*** Makefile.orig Wed Jan 10 13:22:11 1990
--- Makefile Wed Jan 10 13:22:01 1990
***************
*** 63 ****
! | $(MAKEDBM) - $(YPDBDIR)/$(DOM)/hosts.byname; \
--- 63 ----
! | $(MAKEDBM) -b - $(YPDBDIR)/$(DOM)/hosts.byname; \
***************
*** 66 ****
! | $(MAKEDBM) - $(YPDBDIR)/$(DOM)/hosts.byaddr; \
--- 66 ----
! | $(MAKEDBM) -b - $(YPDBDIR)/$(DOM)/hosts.byaddr; \
====================

--
Craig Richmond. Computer Officer - Dept of Economics (morning) 380 3860
University of Western Australia Dept of Education (afternoon) 2368
craig@ecel.uwa.edu.au Dvorak Keyboards RULE! "Messes are only acceptable
if users make them. Applications aren't allowed this freedom" I.M.VI 2-4


Wednesday, October 27, 2010

High Paying Keyword (HPK) Management

How to find high-paying keywords in Google AdWords to increase AdSense revenue. The type of AdSense ad displayed on your page depends on the keywords in your content: as every AdSense user knows, the ads shown on a page are chosen automatically to match your blog's title, URL, and content. So if you use high-paying keywords, you have a better chance of earning more; if your page attracts ads with a higher CPC (Cost Per Click), you get a bigger payout whenever a user clicks one.

Keywords mean everything in the internet marketing world. They can make the difference between striking it rich and losing money. Having a good list of strong, high-earning keywords to use for your business gives you a terrific advantage in your online marketing efforts.

Keyword management has become especially popular lately because it allows you to pinpoint the keywords you should use if you want to make a lot of money online. The first thing you should do is find out your competitors' keywords. This can give you some great ideas about what is working for them so you can replicate it, and it can help you bypass some of the additional work you would otherwise have to do in finding profitable keywords for your type of business.

Second, think up a good number of keywords on your own to add to the list of your competitors' keywords. Make sure you think of the ways different dialects spell certain words, such as the differences between North American English and British English.

Ask family and friends what search terms they would use if they were looking for your business. This gives you an outside view of how people will actually search. I know this is a lot of work, but luckily there is an easier way: keyword management software such as Keyword Elite can help you find all of the niche keywords in your line of business that will bring you solid profits the quickest.

Doing the research yourself leaves a lot of room for error, and you might end up using keywords that have too much competition, rendering your marketing efforts useless. Finding high-paying, low-competition keywords will put you a step above your competition and fill your wallet. Now I will demonstrate how to find high-paying keywords using Google AdWords so you can earn more with Google AdSense.

Steps to find High Paying Keywords in Google:

Step 1. Open Google AdWord tools

Step 2. Enter Keywords (Enter one keyword or phrase per line) and Click on the Get Keyword ideas Button.

Step 3. Now click on the Choose columns to display dropdown and select Show Estimated Average CPC.

Step 4. Change the Dropdown value in Calculate estimates using a different maximum CPC bid to US Dollars (USD $)

Find the highest-CPC keywords in the list and use them in your post content; you will get higher-paying ads.
Tuesday, October 26, 2010

DHCP (Dynamic Host Configuration Protocol)

DHCP (Dynamic Host Configuration Protocol) is a client/server protocol used to simplify the allocation of IP addresses on a network. On a local network that does not use DHCP, an IP address must be assigned to every computer manually. If DHCP is deployed on the local network, every computer connected to the network obtains an IP address automatically from the DHCP server. Besides the IP address, DHCP can hand out many other network parameters, such as the default gateway and DNS servers.

How DHCP Works

Because DHCP is a protocol with a client/server architecture, two parties are involved: the DHCP server and the DHCP client.

* The DHCP server is a machine running a service that can "lease" IP addresses and other TCP/IP information to any client that requests them. Several network operating systems, such as Windows NT Server, Windows 2000 Server, Windows Server 2003, and GNU/Linux, provide this service.
* A DHCP client is a client machine running DHCP client software that lets it communicate with the DHCP server. Most client network operating systems (Windows NT Workstation, Windows 2000 Professional, Windows XP, Windows Vista, and GNU/Linux) include such software.

A DHCP server generally holds a collection of addresses it is allowed to hand out to clients, called the DHCP pool. Each client leases an IP address from this pool for a period determined by the server, usually up to several days. When the lease expires, the client asks the server for a new address or for an extension of the current one.

A DHCP client obtains an IP address lease from a DHCP server in the following four-step process:

1. DHCPDISCOVER: The DHCP client broadcasts a request to locate any active DHCP server.
2. DHCPOFFER: After hearing the client's broadcast, the DHCP server offers an address to the client.
3. DHCPREQUEST: The client asks the DHCP server to lease it an IP address from those available in that server's DHCP pool.
4. DHCPACK: The DHCP server responds to the client's request with an acknowledgment packet, assigns the address (along with the rest of the TCP/IP configuration) to the client, and updates its database. The client then binds the address into its TCP/IP protocol stack and, now that it has an IP address, can begin communicating on the network.
The idea of DHCP is to serve requests from its clients, handing IP addresses out to them automatically. DHCP is designed to serve large networks and complex TCP/IP configurations. It applies whenever a computer is configured to obtain its IP settings via DHCP, which in Windows means enabling the "Obtain an IP Address Automatically" option.

If a DHCP server has the IP range 192.168.1.100 through 192.168.1.200, then every computer on that network that has DHCP enabled will be given an address from that range, i.e. with a final octet between 100 and 200.

If the network contains a computer with a static IP address that falls inside the DHCP server's range, the DHCP server will not hand that address out to any other DHCP client.
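The pool construction just described, a contiguous range minus any statically assigned addresses, can be sketched like this (a hypothetical helper, not the algorithm any particular DHCP server uses):

```python
def build_pool(network_prefix, start, end, static):
    """Enumerate the lease pool for final octets start..end, skipping
    any address already assigned statically."""
    return [f"{network_prefix}.{i}" for i in range(start, end + 1)
            if f"{network_prefix}.{i}" not in static]

pool = build_pool("192.168.1", 100, 105, {"192.168.1.102"})
print(pool)   # 192.168.1.102 is skipped
```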

* An IP address consists of a network address and a host address.

THE DHCP PROCESS:

1. Identify the DHCP server
2. Request an IP address
3. Receive an IP address
4. Decide to use that IP address

DHCP can use the concept of a DHCP relay agent. A DHCP relay agent is a host that forwards DHCP packets between clients and servers; it is used to relay requests and replies between clients and servers that are not on the same physical subnet.

DHCP Configuration
The DHCP server's database is organised like a tree. The root of the tree is the address pool for the natural network, the branches are subnetwork address pools, and the leaves are manual bindings to clients. Subnetworks inherit network parameters and clients inherit subnetwork parameters; therefore, most parameters, such as the domain name, should be configured at the highest applicable level of the tree (network or subnetwork).

Weaknesses of DHCP:

One weakness of DHCP is that unwanted computers can join the network simply by connecting, which gives them access to resources on that network.

To avoid this, every client computer that wants to join the network should have its legitimacy checked. Using the MAC address built into every NIC, a computer's legitimacy can be verified: if a MAC address is not registered with the DHCP server, that computer is not allowed onto the network.
Thursday, October 21, 2010

IPv6 ( Internet Protocol v6 ) Technology

IPv6 is the next generation of IPv4. The addressing scheme used for the TCP/IP protocols is called IP version 4 (IPv4). This scheme uses a 32-bit binary number to identify networks and end stations. This 32-bit scheme yields about 4 billion addresses, but because of the dotted decimal system (which breaks the number into four sections of 8 bits each) and other considerations, only about 250 million usable addresses exist. When the scheme was originally developed in the 1980s, no one ever thought that addresses would become scarce. The advent of the Internet, however, along with the trend of making many devices Internet-compatible (which means they need an address), such as cell phones and PDAs, makes running out of IPv4 addresses a certainty.


Network Address Translation (NAT) and Port Address Translation (PAT) were developed as solutions to the diminishing availability of IP addresses. NAT and PAT enable a company or user to share a single (or a few) assigned IP addresses among several private addresses that are not bound by an address authority. Although these schemes preserve address space and provide anonymity, the benefits come at the cost of individuality, which goes against the very reason for networking in the first place, which is to allow peer-to-peer collaborations through shared applications.

IP addressing scheme version 6 (IPv6) not only provides an answer to the problem of depleting address space, it allows for the restoration of a true end-to-end model, where hosts can connect to each other unobstructed and with greater flexibility. The key elements in IPv6 are to allow for each host to have a unique global IP address, to maintain connectivity even when in motion, and to natively secure host communications.

IPv6 Addresses: The 128-bit address used in IPv6 allows for a far greater number of addresses and subnets (enough space for 10^15 end points: 2^128, or 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses in total).

IPv6 was designed to give every user multiple global addresses that can be used for a variety of devices, including cell phones, PDAs, IP-enabled vehicles, and consumer electronics. In addition to providing more address space, IPv6 has the following advantages over IPv4:

* Easier address management and delegation
* Easy address autoconfiguration
* Embedded IPSec (encrypted security)
* Optimized routing
* Duplicate Address Detection (DAD)

IPv6 Notation
This figure demonstrates the notation and shortcuts for IPv6 addresses.

128 bits are expressed as 8 fields of 16 bits in Hex notation:
2031:0000:130F:0000:0000:09C0:876A:130B

An IPv6 address uses the first 64 bits in the address for the network ID and the second 64 bits for the host ID. The network ID is separated into "prefix" chunks. This figure shows the address hierarchy.
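The usual shortening rules (drop leading zeros in each field, collapse the longest run of all-zero fields to ::) are implemented by Python's standard ipaddress module, which makes them easy to demonstrate:

```python
import ipaddress

def compress(addr):
    """Shorten an IPv6 address: strip leading zeros in each 16-bit
    field and collapse the longest run of zero fields to '::'."""
    return ipaddress.IPv6Address(addr).compressed

print(compress("2031:0000:130F:0000:0000:09C0:876A:130B"))
# 2031:0:130f::9c0:876a:130b
```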

IPv6 Autoconfiguration
IPv4 deployments use one of two methods to assign IP addresses to a host: static assignment (which is management intensive) or DHCP/BOOTP, which automatically assigns IP addresses to hosts upon booting onto the network.

IPv6 provides a feature called stateless autoconfiguration that is similar to Dynamic Host Configuration Protocol (DHCP). With stateless autoconfiguration, any router interface that has an IPv6 address assigned to it becomes the "provider" of IP addresses on the network to which it's attached. Safeguards are built into IPv6 that prevent duplicate addresses; this feature is called Duplicate Address Detection (DAD).

IPv6 Security
IPv6 has embedded support for IPSec. The host operating system (OS) can configure an IPSec tunnel between the host and any other host that has IPv6 support.

NAT and PAT
Although Network Address Translation (NAT) causes problems with peer-to-peer collaboration, it is still widely used, particularly in homes and small offices.

* Static NAT uses a one-to-one private-to-public address translation.
* Dynamic NAT matches private addresses to a pool of public addresses on an as-needed basis. The address translation is still one to one.

Port Address Translation (PAT) is a form of dynamic address translation that maps many private addresses to one public address (or a few). This is referred to as overloading, and it is accomplished by also translating port numbers.
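A toy model of the PAT table makes the overloading idea concrete (the class, addresses, and port range here are all made up for illustration; real NAT devices track full connection tuples):

```python
class PatTable:
    """Toy PAT: map (private_ip, private_port) onto a single public IP
    by handing out distinct public source ports."""
    def __init__(self, public_ip, first_port=40000):
        self.public_ip = public_ip
        self.next_port = first_port
        self.table = {}

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self.table:          # new flow: allocate a port
            self.table[key] = (self.public_ip, self.next_port)
            self.next_port += 1
        return self.table[key]             # existing flow: reuse mapping

pat = PatTable("203.0.113.5")
print(pat.translate("10.0.0.2", 51000))   # ('203.0.113.5', 40000)
```

Two inside hosts can even use the same private port: each gets its own public port, which is exactly what lets many addresses share one.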
Wednesday, October 20, 2010

Mozilla Firefox : how to create own browser skin


To create a skin for Firefox, there are three things you need to know: how to edit images, how to extract zip files, and how to modify CSS. Firefox uses standard GIF, PNG, and JPEG images for the buttons, and CSS to style everything else in the interface.

A skin does not totally change the interface; instead, it just defines how the interface looks. You can't change what happens when the user right clicks on an image, but you can change the look of the right click menu (Make it blue with pink polka dots, for example). If you want to change the functionality of Firefox, you'll have to look into modifying the chrome, which is beyond the scope of this document.

Download the latest version of Firefox and install it. Be sure to install the DOM Inspector extension as well.

Extract Theme

While you can hypothetically begin with any theme already designed for Firefox, for the sake of consistency we'll assume everyone is editing the default Firefox theme. This is located in the file classic.jar, found in the Firefox installation directory. A .jar file is really a renamed zip archive, so opening classic.jar in your archive manager of choice should result in it being detected as a zip archive automatically. If your application doesn't detect classic.jar as a standard zip archive, rename the file classic.zip and continue extraction.
Classic.jar locations

Linux: /usr/lib/MozillaFirefox/chrome/classic.jar or /usr/lib/firefox-*.*.*/chrome/classic.jar

Windows: \Program Files\Mozilla Firefox\chrome\classic.jar

For Mac OS X:

* Go to your applications folder
* Control-click the application icon (the Firefox icon) and choose Show Package Contents.
* Go to contents/MacOS/Chrome/classic.jar

Copy classic.jar to another easily accessible folder -- Classic is recommended -- and extract its contents there, being sure to maintain the directory structure.
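Since a .jar is just a renamed zip, Python's standard zipfile module can unpack it directly. This sketch builds a throwaway jar and extracts it, preserving the internal directory structure (paths here are temporary stand-ins, not your real Firefox install):

```python
import os
import tempfile
import zipfile

def extract_jar(jar_path, dest):
    """A .jar is a renamed zip, so zipfile opens it directly."""
    with zipfile.ZipFile(jar_path) as jar:
        jar.extractall(dest)     # recreates the directory layout
        return jar.namelist()

# demo with a throwaway "classic.jar"
tmp = tempfile.mkdtemp()
jar = os.path.join(tmp, "classic.jar")
with zipfile.ZipFile(jar, "w") as z:
    z.writestr("skin/classic/global/browser.css", "/* css */")

names = extract_jar(jar, os.path.join(tmp, "Classic"))
print(names)
```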
Directories

Inside classic.jar is one directory, skin, as well as two files, preview.png and icon.png.

skin
skin simply contains another directory, classic, which holds all the good stuff.
skin\classic
classic contains the following directories.
skin\classic\browser
browser contains all the toolbar icons, as well as the icons for the bookmark manager and the preferences window.
skin\classic\communicator
Doesn't do a whole lot and can typically be forgotten about promptly.
skin\classic\global
global contains almost all of the important CSS files that define the appearance of the browser. This is the most critical directory in a theme.
skin\classic\help
help contains all the files for theming the help dialog window.
skin\classic\mozapps
mozapps contains all the styles and icons for the browser peripherals, such as the extension manager or update wizard.

Install Your New Theme

Before you can see the changes you make to a Firefox theme (live edits are prohibitively difficult to set up), you must first learn how to repackage the classic theme so it can be installed. For the sake of this discussion we will call your theme "My_Theme", though you can replace that with any name.
Copying The Necessary Files

The first step is to move all the files into the right directory structure. So create a new directory called My_Theme. Into this directory put the browser, global, communicator, help, and mozapps directories from above, as well as the icon.png and preview.png files. (Yes, this means that the structure of your new directory and classic.jar will be slightly different.)
Creating the Install Files
Contents.rdf

Make a copy of contents.rdf, place it in My_Theme, and open it in your text editor. This file is a small XML database used to describe the skin.

In the code search for all instances of "My_Theme" and replace them with the name of your theme.

The packages section lists which components of the browser you're modifying. If we also had skins for ChatZilla, we would need to add another line resembling the existing ones and point it at ChatZilla. This list already includes everything we changed, so just edit the entries to match the name and version you used in the sections before this.

Save the file and exit the text editor.
install.rdf

Make a copy of install.rdf and place it in the My_Theme directory, then open it up in your text editor. This file is a small XML database that describes the skin.


<em:id>{Themes_UUID}</em:id>
<em:version>Themes_Version</em:version>

The first section requires that you establish a UUID for your theme and that you give your theme a version number. Once you've done this, insert the information as above, and scroll down.

You will also have to update the minimum and maximum compatible versions for the target application (Firefox) in the following section:




<em:targetApplication>
  <Description>
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
    <em:minVersion>Min_FF_Version</em:minVersion>
    <em:maxVersion>Max_FF_Version</em:maxVersion>
  </Description>
</em:targetApplication>



Establishing both minimum and maximum compatible versions lets you avoid conflicts with versions of Firefox your theme wasn't designed for -- or wasn't tested on.

See Install Manifests for the reference information about the install.rdf file.
CSS Files

The CSS files in these directories tell the browser how to display the buttons and other controls, where to put the images, what border and padding it should put around them, and so on.

As an example, let's change the standard button.

Go into the global directory and open button.css in your favorite text editor. Scroll down to button {. This section defines the normal button in its basic state (no mouse over it, not disabled, not selected).

Change the background-color: to DarkBlue and the color: to White, and save the file.
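If you'd rather script the edit, here is a self-contained sketch. The button.css contents below are illustrative stand-ins, not the real file, and the sed expressions simply apply the colour change described above.

```shell
# Create an illustrative button.css (the real one has many more rules):
mkdir -p skin/classic/global
cat > skin/classic/global/button.css <<'EOF'
button {
  background-color: ThreeDFace;
  color: ButtonText;
}
EOF

# Apply the change: white text on a dark blue background.
# -i.bak keeps a backup, and works with both GNU and BSD sed.
sed -i.bak \
  -e 's/background-color:.*/background-color: DarkBlue;/' \
  -e 's/^\(  \)color:.*/\1color: White;/' \
  skin/classic/global/button.css

cat skin/classic/global/button.css
```

Of course, opening the file in a text editor and changing the two values by hand works just as well.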

more after I get done with some tests
Repackaging JAR

Now all you need to do is repackage a JAR file with the following directory structure, using your favorite archive manager to create a zip archive:

/browser/*
/communicator/*
/global/*
/help/*
/mozapps/*
/contents.rdf
/install.rdf
/icon.png
/preview.png


Make sure not to just zip up the My_Theme parent directory, since that will cause the drag-and-drop install in the next section to fail without error messages. Once you have put the files in the zip archive, rename it to My_Theme.jar.
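The packaging step can be sketched from the command line with Info-ZIP's zip. This is a hedged example: the stub directories and empty files stand in for your real theme files.

```shell
# Hypothetical minimal layout standing in for your finished theme:
mkdir -p My_Theme/browser My_Theme/communicator My_Theme/global \
         My_Theme/help My_Theme/mozapps
touch My_Theme/contents.rdf My_Theme/install.rdf \
      My_Theme/icon.png My_Theme/preview.png
echo 'button {}' > My_Theme/global/button.css

# Zip from INSIDE the directory so the archive root holds the
# directories and rdf files themselves, not a My_Theme/ wrapper
# (which would break the drag-and-drop install):
(cd My_Theme && zip -qr ../My_Theme.jar .)

unzip -l My_Theme.jar   # entries appear at the root, e.g. contents.rdf
```

Listing the archive afterwards is a quick sanity check: if every entry starts with My_Theme/, you zipped the parent directory by mistake.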
Triggering the install from the web

To install the theme's JAR file directly from the web, you need to run some JavaScript.
Sunday, October 17, 2010

Email security with Gmail's Security Checklist

Gmail's support site has a security checklist that's useful if you want to make sure that your Gmail account is secure. There are some obvious tips like updating your operating system and your browser, but Google also posted some advanced tricks:

1. "Check the list of websites that are authorized to access your Google Account data. Make sure that the list of authorized websites are accurate and ones that you have chosen. If your Google Account has been compromised recently, it's possible that the bad guys could have authorized their own websites to access your account data." To edit the list of authorized websites, go to this page.

2. "Check your browser for plug-ins, extensions, and third-party programs/tools that require access to your Google Account credentials. Plug-ins and extensions are downloadable computer programs that work with your browser to perform specific tasks. For example, you may have downloaded a plug-in or extension that checks your Gmail inbox for new messages. Google can't guarantee the security of these third party services. If those services are compromised, so is your Gmail password."

3. "Confirm the accuracy of your mail settings to ensure that your mail stays and goes where you want it to. Sign in to your account and click on the Settings link at the top to check the following tabs:

* General: Check Signature, Vacation Responder, and/or canned responses for spammy content
* Accounts: Verify your Send Mail As, Get mail from other accounts, and Grant access to your account are all accurate.
* Filters: Check that no filters are sending your mail to Trash, Spam, or forwarding to an unknown account.
* Forwarding and POP/IMAP: Ensure your mail isn't sent to an unknown account or mail client."

4. "Check for any strange recent activity on your account. Click the Details link next to the 'Last Account Activity' entry at the bottom of your account to see the time, date, IP address and the associated location of recent access to your account."

5. "Use a secure connection to sign in. In your Gmail settings, select 'Always use HTTPS.' This setting protects your information from being stolen when you're signing in to Gmail on a public wireless network, like at a cafe or hotel."
Wednesday, October 13, 2010

Build your own Speedy-style quota system

Have you ever been annoyed because your internet allowance is limited? That is a common experience for Telkom Speedy users, especially those on the 50-hour or usage-based quota plans. This time kartolo will try to explain how to build and configure a Speedy-style internet quota system, with limits based on both time and data usage.

If internet quotas are new to you, here is an example: user A is given a limit of 8 hours and 50 MB per day. Even if the data allowance is used up before the 8 hours are over, the user has to wait until the next day to get back online. Don't worry about the configuration for this quota system; you can follow along and adapt the settings I used.

This quota system uses an add-on package, Squish; the version used here is squish 0.0.18. We will set it up on Linux, specifically the Fedora Core 4 distribution, with the squid package already installed. Here is what we will need:

- gd-2.0.33-2.i386.rpm
- perl-GD-2.35-1.fc4.i386.rpm
- squish-0.0.18.tar.gz


First, download the packages above:

# wget http://h1.ripway.com/ilmuwebsite2/gd-2.0.33-2.i386.rpm
# wget http://h1.ripway.com/ilmuwebsite2/perl-GD-2.35-1.fc4.i386.rpm
# wget http://h1.ripway.com/ilmuwebsite2/squish-0.0.18.tar.gz

Then install gd-2.0.33-2.i386.rpm and perl-GD-2.35-1.fc4.i386.rpm:

# rpm -ivh gd-2.0.33-2.i386.rpm
# rpm -ivh perl-GD-2.35-1.fc4.i386.rpm

Next, extract squish-0.0.18.tar.gz:
# tar -xzvf squish-0.0.18.tar.gz

This creates a new directory, squish-0.0.18. Change into it, then run the install:
# cd squish-0.0.18
# make install

Change to the directory where squish is installed:
# cd /usr/local/squish/

Then run squish.pl with its install option; this builds the initial bandwidth-usage report:

# ./squish.pl --install

Using crontab, add a new job for the crond daemon:

# crontab -e
5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/local/squish/squish.cron.sh

Then press ESC, type :x and press ENTER to save and exit.

Run the new job by hand the first time:


# /usr/local/squish/squish.cron.sh

Next, add ncsa_auth authentication to the squid configuration file at /etc/squid/squid.conf:

# nano /etc/squid/squid.conf
# add these lines in the authentication section

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

# this one goes in the acl section
acl ncsa proxy_auth REQUIRED

# then save the file
After that, edit the section below this line

### added by squish (begin)

so that it looks like this:

# acl's for squish - autodetected, sometimes
acl SQUISHLOC dst ns.multimedia.com
acl SQUISHED1 proxy_auth -i "/etc/squid/squished"

#acl SQUISHED2 ident "/etc/squid/squished"
acl SQUISHED3 src "/etc/squid/squished"
acl password proxy_auth REQUIRED

# Error info that says you're squished
deny_info http://ns.multimedia.com/squish/?squished& SQUISHED1

# deny_info http://ns.multimedia.com/squish/?squished& SQUISHED2
deny_info http://ns.multimedia.com/squish/?squished& SQUISHED3

# HTTP access controls for squish

http_access allow SQUISHLOC
http_access allow password !SQUISHED1
http_access deny SQUISHED1

# http_access deny SQUISHED2

http_access deny SQUISHED3

### added by squish (end)

http_access allow ncsa


Next, edit the httpd configuration file:

# nano /etc/httpd/conf/httpd.conf
# add the following line at the very bottom of that file:

include /usr/local/squish/apache-squish.conf

Then edit the apache-squish.conf file:
# nano /usr/local/squish/apache-squish.conf

Edit the file so that it looks like this:

Alias /squish "/usr/local/squish/"

<Directory "/usr/local/squish/">
    Options +ExecCGI
    AddHandler cgi-script .cgi
    DirectoryIndex squish.cgi
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

In the last step, you simply create the users who are allowed to access the internet, by building a file that lists the permitted logins (pass htpasswd the -c flag the first time so it creates the file):

# htpasswd /etc/squid/passwd mamang

Give other users read permission on /etc/squid/passwd so that apache can read it:

# chmod o+r /etc/squid/passwd
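If htpasswd happens not to be installed, here is a hedged alternative sketch: openssl can emit the same Apache MD5 ($apr1$) hash that htpasswd writes, which most ncsa_auth builds accept (check yours). The username "mamang", the password "secret", and the passwd.demo path are example values.

```shell
# Build an htpasswd-style "user:hash" line with openssl instead of htpasswd.
# -apr1 produces the Apache MD5 format; values here are examples only.
printf 'mamang:%s\n' "$(openssl passwd -apr1 secret)" > passwd.demo

# World-readable, as in the chmod step above, so apache can read it:
chmod o+r passwd.demo

cat passwd.demo   # mamang:$apr1$...
```

On the real system you would write to /etc/squid/passwd rather than passwd.demo.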

Then restart the squid and httpd services:

# service squid restart
# service httpd restart

To set usage limits for users, edit the squish.conf configuration file:

# nano /etc/squid/squish.conf

squish.conf:
# This file contains data formatted as follows:
#
# Blank lines and hashed stuff is for comments
# user amount/period
# bandwidth: 999[kmG]b / period: day, week, month
# time: 999[smh] / period: day, week, month
#
# Whitelist entries - they can have as much as they like

192\.168\.99\.44 25h/day
192\.168\.97\.43 25h/day

mamang 12h/day 120Mb/day

# Poor guy:
root 1h/day 1Mb/day 2Mb/week

# Catchall - users and IPs not matched by the above rules
.* 4h/day 20Mb/day 20h/week 100Mb/week


Done. That is the broad outline; with this in place we can allocate user quotas ourselves without having to worry about overuse and so on. If you have any questions, contact kartolo. :o


Adapted from: ilmuwebsite
Friday, October 8, 2010

System Restore with Safe Mode Command Prompt

System Restore does not have to be run from the Windows desktop. Another method that is also quick and easy is running System Restore from Safe Mode with Command Prompt, and that is what I will cover this time. This article applies to Windows XP; I have not tried it on newer versions such as Windows Vista and Windows 7. If you have, please share your experience here. The contents of this article may not be relevant to your operating system if you use another version of Windows.

Here I will explain how to start System Restore in safe mode using the Command Prompt (SAFEBOOT_OPTION = Minimal(AlternateShell)). Use this option when you cannot start System Restore from the Windows XP desktop.

System Restore is a tool that automatically monitors and records every change made to Windows system files and the registry. If a change makes your system unstable, System Restore can undo (or "roll back") the system to a point in time when your computer still worked correctly.

How do you start System Restore using the Command Prompt? The first requirement is that you log on to Windows with an administrator account, not as a limited user. To verify that you are logged on to Windows with a computer administrator account, visit the following Microsoft website:
http://support.microsoft.com/gp/admin
If a new program has made your computer crash, and uninstalling that program does not help, you can try Windows XP System Restore via the command prompt.

The second requirement is that System Restore must have been enabled in Windows beforehand; otherwise you cannot return your computer to an earlier state.

To start System Restore using the Command Prompt, follow these steps:

1. Restart your computer, then press and hold F8 during the initial startup to start your computer in safe mode with Command Prompt.
2. Use the arrow keys to select the Safe Mode with Command Prompt option.
3. If you are prompted to select an operating system, use the arrow keys to select the appropriate operating system for your computer, then press ENTER.
4. Log on as Administrator or with an account that has administrator rights.
5. At the command prompt, type %systemroot%\system32\restore\rstrui.exe, then press ENTER.
6. Follow the instructions on the screen to restore your computer to the working state it was in before the problem.
Wednesday, October 6, 2010

Google Rank: How to Get Top Rankings in Google

This is how to get top rankings in Google: you can be on page one for the right targets, targets that are relevant to what you want to do with your web.

There are no SEO silver bullets. Stop looking for them and get to work. If you don't know what the work that you need to do is, go read my better search engine rankings primer written in plain language.

Google has become the most powerful search engine on the internet. In my opinion, that is because Google delivers the most relevant search results. By relevant, I mean that if you were searching for any of those phrases bulleted above and found this page, you found a web page very close to what you were looking for. Relevance creates conversions to inquiries and sales! Relevance is GOOD. It's conversions that matter, not hits!

Let me clarify that last statement a little: first you have to have rankings, then you have to have clicks, then you need conversions. So, hits do matter, but only if your content is creating inquiries and sales. That's the way I see it.

In January 2010, in the USA market Google received 66.3% of all searches, Yahoo 14.5%, Microsoft Bing 10.9%, AOL Search 2.5% and Ask.com 1.9%. The total number of searches in January 2010 was 10,272,099,000.

The process I use to get my clients top rankings on Google and the other major search engines is basically as follows, and I am going to link you off to pages in my web that provide more details about each of these subjects. You can also read this page about my SEO Project Management

Perform a Google (and other search engines) structural compliance analysis. This is something very few SEO folks do. Why? Perhaps they don't know? Google provides recommendations to webmasters in their Webmaster Guidelines, which is public information. Google also has new patents that indicate what is important to them in ranking web pages. I also use up to the minute search engine research and know what works with Google's algorithm and what doesn't. I am not talking about anything sneaky or unethical - black hat bad guy tactics will not work for long - I don't do them. I am talking about what Google says they like and want. You do want Google to be happy, don't you?

Want to see similar information for beginners? Check out my Free SEO Help files: request a copy of my free help files and sign up for your Free SEO Tip of the Day at the top left of the navigation on this page. John Alexander is a great SEO teacher and a good friend.

Go to Google, search for free SEO help files, and see where my page is ranked. What I do works for me.

Understand your marketing plan. You need to understand your marketing plan and I need to understand what you want your web to do for you if you want me to help you. You do have a plan, right? Complete my SEO Questionnaire so that I can also understand your priorities and your market. This information gets me all set up to help you get top rankings on Google. It is a good marketing exercise for you.

Do search phrase research. We can determine through search phrase research exactly what people are searching for relative to what you are doing with your web. Read more about Keyword Services. I am in the top four out of about 285 million competitors last time I checked.

The complexity of the way people search is increasing. The long tail is getting longer: 1- and 2-word search queries are on the decline, 4- and 5-word queries are rising, and 3-word queries are down slightly. Four-word queries are up 12% since 2007, and five-word queries are up 16%.

Write keyword rich page copy. A web page works best with Google when it has enough text properly written about ONE SUBJECT. What Google's algorithm is trying to determine when it analyzes your web pages is: what is this page about? If you have a mixed message, the algorithm can't tell what the page is about, and you get no rankings.

Read more about Web Copywriting Basics. Also read my home page copywriting tips. Creating pages that work with Google may require some rearrangement of your content by creating new pages. Many webs have good content but the way it is arranged in the web creates mixed messages. Sound familiar? Not uncommon.

Get some links in to your web from authoritative sources. Links from web pages with Google PageRank of 4 and above will get you into Google's index with a positive start. For a few hundred dollars you can list your web in paid directories that Google respects and indexes every day. This is the fastest way to be found, much better than submitting directly to Google. The cost to do the best dozen is about $900.

Keep adding content. Google treats webs with more than 100 pages of good content differently from smaller webs. 100 pages sounds like a lot? Not really. Let's discuss all of the ways you can easily add content. Some are free. Read some great ideas for adding content to your website.

Start a link building program. When you have done everything you can to create the best quality content, links to your website from other websites will be the greatest determining factor in your Google rankings.

Is your web in Google trouble or the supplemental index? We make an appeal to the Google Team telling them that we have cleaned up your act, that you will be a good boy or girl from now on, and that you want to be reinstated in their index. If you made an honest mistake or if you tried to be tricky and got caught, and you will get caught, Google will give you a break one time. Don't do it again.

Write a search engine optimized press release. This will bring large-volume traffic to your web in a matter of days instead of weeks or months. I work with a pro in California, a professional writer with a journalism background. Your release will be on the first page of Google, typically in 5-7 days. Cost is about $600. If you are not on Google page one you don't pay the whole fee until you are.

Use Google Sitemaps. Using an XML sitemap in your web, in addition to a regular sitemap, is a great way to see what Google sees when indexing your web. The Sitemaps Beta has now been incorporated into your free Google account under Webmaster Tools.


There are a number of XML sitemap generators online. I have tested 20 of them, none of which worked correctly for websites with more than 30 pages.
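As a fallback when the generators fail, a small sitemap is simple enough to write by hand; this sketch follows the sitemaps.org format, and the example.com URLs are placeholders for your own pages.

```shell
# Write a minimal, valid XML sitemap by hand; replace the example.com
# URLs with your own pages.
cat > sitemap.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/about.html</loc></url>
</urlset>
EOF

cat sitemap.xml
```

For a handful of pages this is all Google needs; the generator tools below earn their keep on larger sites.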

An excellent tool you can download and run from your desktop is GSiteCrawler, a Google Sitemap generator for Windows. Be sure to download the latest update.

GSiteCrawler is a better solution for webs with lots of content and for webs where your images are important for image search. GSiteCrawler will index everything including all pages in all directories and your images. It uses 6 simultaneous crawlers to capture your content. Making a sitemap for a web with 2600 pages took me about an hour and a half.

GSiteCrawler also generates the Yahoo Site Explorer urllist.txt file. Read more here in my Sitemaps How to for Google and Yahoo!

Open your free Google account, then request a verification code to show that you have the right to manage the site. The best bet is to add the verification file as a named HTML page in your website. Google will tell you what to name the file. Add the file to the same directory as your home page. Do the same with Yahoo Site Explorer.

Then use GSiteCrawler to crawl your website, put the sitemap online in your website, go back to Google, and tell Google the name of the file; they will check it out and approve it.

It takes me about 10 minutes, but I have done a bunch of them. Google will tell you if they find any indexing errors, missing pages, broken links, etc. When you add new pages to your web, make another sitemap and resubmit it. Other engines are using XML maps, too. This makes Google's job easier, and is a leg up on your competitors to get top rankings in Google.
