Wednesday, November 17, 2010

Microsoft Surface: The New Generation of Multi-Touch Computer



Microsoft Surface is more than a computer. It’s a leap ahead in digital interaction. By enabling you to use your hands instead of a keyboard and mouse, it revolutionizes the way you interact with digital content, while keeping the ability to connect with other devices such as networks, printers, mobile devices, card readers, and more. Our Microsoft Surface Partners have created hundreds of applications for the platform.

Microsoft Surface is currently based on the Windows Vista platform, which makes it especially easy for companies to manage, deploy and support Microsoft Surface units. The current version of the software platform is Microsoft Surface 1.0 Service Pack 1, which adds an improved user interface, better manageability to help reduce the cost of ownership, broader international support, and faster, easier ways to design innovative applications. Read more about Service Pack 1 on the Microsoft Surface Blog.

The sophisticated camera system of Surface sees what is touching it and recognizes fingers, hands, paintbrushes, tagged objects and a myriad of other real-world items. It allows you to grab digital information and interact with the content through touch and gesture. And unlike other touch-screens, Surface recognizes many points of contact simultaneously, 52 to be exact. This allows multiple people to use Surface at the same time, creating a more engaging and collaborative computing experience than what is available via traditional personal computers, mobile devices, or public kiosks.

Tagged object recognition is a particularly innovative feature of Microsoft Surface. The tag is what lets Surface uniquely identify objects, helping the system tell the difference between two identical looking bottles of juice, for example. Applications can also use a tag to start a command or action, so simply placing a tagged object on the screen can open up a whole new experience. A tagged object might also identify a cardholder, so they can charge purchases or participate in a loyalty program. A tag can even tell Surface to display unique information about a tagged object, such as showing more information about a bottle of wine, the wine grower, the type of grape and vintage.

A Microsoft Surface unit is a PC running the Windows Vista Business operating system, so in some ways a Surface unit is like a desktop computer or a server. The units include all of the standard manageability features available in Windows Vista to enable easy deployment and administration, including Windows Management Instrumentation (WMI), Remote Desktop Connection, Windows Script Host, Event Viewer, Performance Monitor, Event Forwarding, Task Scheduler, and so on. You can also use enterprise management tools like the Microsoft System Center family of products to deploy software, maintain patches, manage configuration, and monitor health.

You can also deploy Microsoft Surface units to remote locations away from the immediate reach of an IT administrator and then use remote management tools and scripting to accomplish administration tasks, such as taking a unit offline, installing applications, monitoring the unit, managing updates, and recovering the system.
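As a minimal sketch of what such remote administration can look like, the VBScript below queries a unit's operating system details over WMI. The host name SURFACE01 is purely illustrative, and the sketch assumes remote WMI access is enabled and that you have administrative rights on the unit:

Option Explicit

Dim objWMI, objOS, colOS

' Connect to the remote unit's WMI service (SURFACE01 is a hypothetical host name).
Set objWMI = GetObject("winmgmts:\\SURFACE01\root\cimv2")

' Query basic operating system and health information.
Set colOS = objWMI.ExecQuery("SELECT Caption, LastBootUpTime, FreePhysicalMemory FROM Win32_OperatingSystem")

For Each objOS In colOS
    WScript.Echo "OS: " & objOS.Caption
    WScript.Echo "Last boot: " & objOS.LastBootUpTime
    WScript.Echo "Free memory: " & objOS.FreePhysicalMemory & " KB"
Next

The same pattern extends to any WMI class, so scripts like this can feed a monitoring system without anyone having to visit the unit.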
Microsoft Surface represents a fundamental change in the way we interact with digital content. Leave the mouse and keyboard behind. Surface lets you grab digital content with your hands and move information with simple gestures and touches. Surface also sees and interacts with objects placed on the screen, allowing you to move information between devices like mobile phones or cameras. The result is a fun, social and exciting computing experience like you’ve never had before.

Microsoft Surface has four key capabilities that make it such a unique experience:

* Direct interaction. Users can grab digital information with their hands and interact with content on-screen by touch and gesture – without using a mouse or keyboard.
* Multi-user experience. The large, horizontal, 30-inch display makes it easy for several people to gather and interact together with Microsoft Surface - providing a collaborative, face-to-face computing experience.
* Multi-touch. Microsoft Surface responds to many points of contact simultaneously - not just from one finger, as with a typical touch screen, but from dozens of contact points at once.
* Object recognition. Users can place physical objects on the screen to trigger different types of digital responses – providing for a multitude of applications and the transfer of digital content to mobile devices.

Microsoft Surface uses cameras and image recognition in the infrared spectrum to recognize different types of objects such as fingers, tagged items and shapes. This input is then processed by the computer and the resulting interaction is displayed using rear projection. The user can manipulate content and interact with the computer using natural touch and hand gestures, instead of a typical mouse and keyboard.
Monday, November 15, 2010

Enable Internet Connection Firewall using VBScript

Windows Firewall helps to protect computers from unsolicited network traffic. The Windows Firewall APIs make it possible to programmatically manage the features of Windows Firewall by allowing applications to create, enable, and disable firewall exceptions.
Windows Firewall API is intended for situations in which a software application or setup program must operate with adjustments to the configuration of the networking environment in which it runs. For example, a service that needs to receive unsolicited traffic can use this API to create exceptions that allow the unsolicited traffic.
Windows Firewall API is designed for use by programmers using C/C++, Microsoft Visual Basic development system, Visual Basic Scripting Edition, and JScript development software. Programmers should be familiar with networking concepts such as stateful packet filtering, TCP/IP protocol concepts, and network address translation (NAT).
Windows Firewall API is supported on Windows XP with Service Pack 2 (SP2). For more specific information about which operating systems support a particular programming element, refer to the Requirements sections in the documentation.

Note: Internet Connection Firewall may be altered or unavailable in subsequent versions. Instead, use the Windows Firewall API.
The following VBScript code first determines if Internet Connection Sharing and Internet Connection Firewall are available on the local computer. If so, the code enumerates the connections on the local computer, and enables Internet Connection Firewall on the connection that is specified as a command line argument.


' Copyright (c) Microsoft Corporation. All rights reserved.

OPTION EXPLICIT

DIM ICSSC_DEFAULT, CONNECTION_PUBLIC, CONNECTION_PRIVATE, CONNECTION_ALL
DIM NetSharingManager
DIM PublicConnection, PrivateConnection
DIM EveryConnectionCollection

DIM objArgs
DIM con

ICSSC_DEFAULT = 0
CONNECTION_PUBLIC = 0
CONNECTION_PRIVATE = 1
CONNECTION_ALL = 2

Main()

sub Main()
    Set objArgs = WScript.Arguments

    if objArgs.Count = 1 then
        con = objArgs(0)

        WScript.Echo con

        if Initialize() = TRUE then
            GetConnectionObjects()
            FirewallTestByName(con)
        end if
    else
        DIM szMsg
        szMsg = "Invalid usage! Please provide the name of the connection as the argument." & chr(13) & chr(13) & _
                "Usage:" & chr(13) & _
                " " + WScript.scriptname + " " + chr(34) + "Connection Name" + chr(34)
        WScript.Echo( szMsg )
    end if

end sub

' Enables the firewall on the connection whose name matches conName.
sub FirewallTestByName(conName)
    on error resume next
    DIM Item
    DIM EveryConnection
    DIM objNCProps
    DIM szMsg
    DIM bFound

    bFound = false
    for each Item in EveryConnectionCollection
        set EveryConnection = NetSharingManager.INetSharingConfigurationForINetConnection(Item)
        set objNCProps = NetSharingManager.NetConnectionProps(Item)
        if (ucase(conName) = ucase(objNCProps.Name)) then
            szMsg = "Enabling Firewall on connection:" & chr(13) & _
                    "Name: " & objNCProps.Name & chr(13) & _
                    "Guid: " & objNCProps.Guid & chr(13) & _
                    "DeviceName: " & objNCProps.DeviceName & chr(13) & _
                    "Status: " & objNCProps.Status & chr(13) & _
                    "MediaType: " & objNCProps.MediaType

            WScript.Echo(szMsg)
            bFound = true
            EveryConnection.EnableInternetFirewall
            exit for
        end if
    next

    if( bFound = false ) then
        WScript.Echo( "Connection " & chr(34) & conName & chr(34) & " was not found" )
    end if

end sub

' Creates the NetSharingManager object and checks that sharing is available.
function Initialize()
    DIM bReturn
    bReturn = FALSE

    set NetSharingManager = Wscript.CreateObject("HNetCfg.HNetShare.1")
    if (IsObject(NetSharingManager)) = FALSE then
        Wscript.Echo("Unable to get the HNetCfg.HnetShare.1 object")
    else
        if (IsNull(NetSharingManager.SharingInstalled) = TRUE) then
            Wscript.Echo("Sharing isn't available on this platform.")
        else
            bReturn = TRUE
        end if
    end if
    Initialize = bReturn
end function

' Retrieves the public, private and complete connection collections.
function GetConnectionObjects()
    DIM bReturn
    DIM Item

    bReturn = TRUE

    if GetConnection(CONNECTION_PUBLIC) = FALSE then
        bReturn = FALSE
    end if

    if GetConnection(CONNECTION_PRIVATE) = FALSE then
        bReturn = FALSE
    end if

    if GetConnection(CONNECTION_ALL) = FALSE then
        bReturn = FALSE
    end if

    GetConnectionObjects = bReturn

end function

' Enumerates connections of the requested type and caches the results.
function GetConnection(CONNECTION_TYPE)
    DIM bReturn
    DIM Connection
    DIM Item
    bReturn = TRUE

    if (CONNECTION_PUBLIC = CONNECTION_TYPE) then
        set Connection = NetSharingManager.EnumPublicConnections(ICSSC_DEFAULT)
        if (Connection.Count > 0) and (Connection.Count < 2) then
            for each Item in Connection
                set PublicConnection = NetSharingManager.INetSharingConfigurationForINetConnection(Item)
            next
        else
            bReturn = FALSE
        end if
    elseif (CONNECTION_PRIVATE = CONNECTION_TYPE) then
        set Connection = NetSharingManager.EnumPrivateConnections(ICSSC_DEFAULT)
        if (Connection.Count > 0) and (Connection.Count < 2) then
            for each Item in Connection
                set PrivateConnection = NetSharingManager.INetSharingConfigurationForINetConnection(Item)
            next
        else
            bReturn = FALSE
        end if
    elseif (CONNECTION_ALL = CONNECTION_TYPE) then
        set Connection = NetSharingManager.EnumEveryConnection
        if (Connection.Count > 0) then
            set EveryConnectionCollection = Connection
        else
            bReturn = FALSE
        end if
    else
        bReturn = FALSE
    end if

    if (TRUE = bReturn) then
        if (Connection.Count = 0) then
            Wscript.Echo("No " + CStr(ConvertConnectionTypeToString(CONNECTION_TYPE)) + " connections exist (Connection.Count gave us 0)")
            bReturn = FALSE
        'valid to have more than 1 connection returned from EnumEveryConnection
        elseif (Connection.Count > 1) and (CONNECTION_ALL <> CONNECTION_TYPE) then
            Wscript.Echo("ERROR: There was more than one " + ConvertConnectionTypeToString(CONNECTION_TYPE) + " connection (" + CStr(Connection.Count) + ")")
            bReturn = FALSE
        end if
    end if
    Wscript.Echo(CStr(Connection.Count) + " objects for connection type " + ConvertConnectionTypeToString(CONNECTION_TYPE))

    GetConnection = bReturn
end function

' Maps a connection-type constant to a readable string.
function ConvertConnectionTypeToString(ConnectionID)
    DIM ConnectionString

    if (ConnectionID = CONNECTION_PUBLIC) then
        ConnectionString = "public"
    elseif (ConnectionID = CONNECTION_PRIVATE) then
        ConnectionString = "private"
    elseif (ConnectionID = CONNECTION_ALL) then
        ConnectionString = "all"
    else
        ConnectionString = "Unknown: " + CStr(ConnectionID)
    end if

    ConvertConnectionTypeToString = ConnectionString
end function
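To run the script, save it to a file and pass the connection name as the single command-line argument. Assuming you saved it as EnableFirewall.vbs (a file name chosen here for illustration), a typical invocation from a command prompt would be:

cscript EnableFirewall.vbs "Local Area Connection"

The script echoes the connection's properties and then enables the firewall on it, or reports that the named connection was not found.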






Thursday, November 11, 2010

Download, Install and Run the Android Emulator


In this tutorial we are going to learn how to install and run any given Android application in the Android emulator. I will help you set up the Android 1.1 and 1.5 emulators.

You must have Java installed on your machine before installing the Android emulator.

Install Java: go to java.com and install it from there.

If you installed Java a while ago, you should check whether the version of Java on your machine is still the correct one.

Test the installed Java: go to java.com to check whether you have the correct version of Java installed. If you find that the correct version is not installed, follow the instructions provided on the page and install the correct version.

After Java has been installed successfully, follow the steps below to install the Android emulator.


Download the Android emulator

The Android emulator comes with the Android SDK. You can download the latest SDK from android.com.

How to install the Android emulator

The downloaded Android SDK will be in compressed format. Uncompress it to the desired location (e.g. C:\Emulator) and you are done.

Android SDK commands for the emulator

Launch the command prompt (Start > Run > cmd). Locate the path where emulator.exe was installed; in this case, go to the C:\Emulator directory and navigate to the tools directory, e.g. C:\Emulator\Android\android-sdk-windows-1.5_r2\tools, where emulator.exe is present. Navigate to that tools directory in the command prompt.

Type emulator.exe -help and execute it. You will see a list of useful commands related to emulator usage.

* Type android -h and execute it. You will see a list of useful commands related to the android tool.

Android virtual device

An Android virtual device (AVD) is the emulator instance that we are going to create, based on a specific Android platform, e.g. Android 1.1, Android 1.5, Android 2.0, etc.

* Type emulator.exe -help-virtual-device and execute it for more information about Android virtual devices.

Set up the Android virtual device

This is the command format to create an Android virtual device: android create avd -n [UserDefinedNameHere] -t [1 or 2 or 3 etc., based on which Android platform you want to target]

* If you want to create an Android virtual device based on the Android 1.1 platform, type: android create avd -n MyAndroid1 -t 1
* If you want to create an Android virtual device based on the Android 1.5 platform, type: android create avd -n MyAndroid2 -t 2
* After executing one of the above commands, you will be prompted to create a custom hardware profile. Type no if you don't want to create one.

* After successful setup of the Android virtual device, you will see a confirmation message depending on which platform the device is based on: "Created AVD 'MyAndroid1' based on Android 1.1" or "Created AVD 'MyAndroid2' based on Android 1.5".

List the created Android virtual devices

Type android list avd. This command will display the list of the created Android virtual devices.

Launch a created Android virtual device (emulator)

Type emulator.exe -avd [name of the AVD created]. In our case, type emulator.exe -avd MyAndroid1 or emulator.exe -avd MyAndroid2.

You should now see the emulator launch. Give it some time to initialize, and ignore any error message that appears as long as it is not blocking. The emulator also picks up an Internet connection automatically, so you do not need to do anything to surf the Internet in the emulator. Now you are ready to start your application on the emulator!
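Putting the steps together, a typical end-to-end session (assuming the SDK was uncompressed to the path used above and you chose the Android 1.5 platform) looks like this:

cd C:\Emulator\Android\android-sdk-windows-1.5_r2\tools
android create avd -n MyAndroid2 -t 2
android list avd
emulator.exe -avd MyAndroid2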

Tuesday, November 9, 2010

VoIP System Standard Features

Voice over IP (VoIP) systems are gaining in popularity today for several reasons. Most notable are the availability of so many open source and commercial options, the wide range of interface devices that allow you to connect to existing circuit-based networks and hardware, and the ability to create full end-to-end IP-based solutions using available high-speed links or gateways with commercial trunk providers. However, it is still easy to make a poor purchasing decision unless you take a good look at the basic requirements of what a communication system needs to provide to be effective in your business and help you achieve your cost savings, while still maintaining your ability to stay connected to your clients and vendors in an effective manner. My intent here is to outline the five standard features that your VoIP system should have for it to be considered the proper solution for your enterprise.

Examination

Standardization and Flexibility

Just like Henry Ford grew the automobile business upon the obvious concept of standardization, the VoIP industry gained its recent focus and popularity upon the same ideals. Gone are the days of having to select and architect a solution based upon the supported protocols of one vendor and become locked in. The two major signaling protocols you will see today, H.323 and SIP, are the largest players, with H.323 starting to lag behind as SIP gains in popularity and support, more compliant vendors, and continued enhancements to support more media streams and tighter device integration - until recently the areas that kept H.323 ahead in the game. Keep in mind that H.323 is still the more mature technology; currently it is holding its ground in the carrier space and is used quite extensively as a trunk-side protocol. This has allowed SIP to comfortably gain a foothold in the enterprise space as a local carrier-style protocol that is simpler to implement, troubleshoot, and extend with new features as needed. It is clear that for a system to be considered future-proof it has to support the currently prevalent standards, but it must also allow 'pluggability' and offer support for emerging standards, or alterations to the existing standards, as well.

Integration Options

Unless you are starting from scratch you are most likely attempting to integrate a VoIP option into your existing infrastructure as part of a phased deployment strategy. The current list of options to accomplish this is growing longer and longer every day, and that has many benefits for the consumer with regards to architecture, cost, interoperability options, and the quicker movement to tighter standards compliance between vendors. There is no longer a need to consider the move to VoIP to be an all or nothing deal with the introduction of gateway devices that allow you to leverage your current investment in older TDM based equipment and enhance it with the newer IP based messaging solutions. Doing this allows you to add new devices to the newer IP network while maintaining a rich level of integration with the legacy TDM equipment until it lives out its natural (and still depreciable) life span.

Security

As with everything today, security is a huge consideration when you start to think about moving to VoIP. If it is not something that you have already thought about, or your vendor has not discussed it deeply with you, then I urge you to stop reading this right now, go to your vendor and ASK how secure your current VoIP implementation (or the one you are planning to install) is.

Security is so critical in today's business market, but because people have felt so safe in the past using TDM voice infrastructures - where 'tapping' meant actually making a physical connection to the 'wires' - the thought of security quite often eludes people when you start talking about VoIP. It is so easy to fire up a copy of Wireshark on your network, collect some packets, and use the tools built right into the GUI to listen to VoIP conversations. So, what do you do? The answer is simple. You turn on encryption and ensure that every device within your IP infrastructure involved in the call supports the encryption scheme that you pick. Keep in mind that while encryption is good, it does add CPU load to your devices, can cause higher network utilization because the packets get larger, and can add complexity to any troubleshooting efforts; but you should NOT implement any VoIP system without taking security into consideration. One fine option that I have used is to consider your internal network to be secure and just encrypt the calls that pass via IP between external parties (over your IP-based trunks if you use them) or between all your company locations using the public IP network. It's my feeling that as long as you keep your internal IP infrastructure secure (i.e., tight controls on who can enter your IT area, and you are using switches rather than hubs), bothering to encrypt internal IP connections is not always needed, because in a properly configured environment you will not be able to capture VoIP data other than what is directed to you. If you are using hubs then all bets are off, of course. That is a subject for another article.

Support

As with most other technologies, part of your purchasing and deployment planning MUST be to take the support model into consideration. Don't just assume that the vendor supporting your current internet connection to the external world is going to be there if you have IP trunk problems, understands what terminology like QoS, jitter, and other VoIP lingo means, or even knows how to correct problems when they occur. As when introducing any new technology, you need to have a sit-down with everyone involved in the proposed value chain and establish an understanding of expectations, possible support needs, costs, and schedules. You may find out that your provider is by default blocking the native ports that the typical VoIP protocols require, simply because other clients have not needed them to be open. If you are ready to move to IP for your voice communications, understand that while you may have been willing to put up with slowness in the afternoons when you tried to use the web to order your dinner so you could pick it up on the way home, a slow data connection can wreak havoc with voice quality and the ability to establish a call. You may need to consider a separate IP connection dedicated to voice, and in fact you may need to start considering redundant connections using two different providers for your voice if you have not already done so for your data. Additionally, ensure that your vendor has the proper debugging tools in place, knows how to use them, and is willing to offer training to your staff - or that you are willing to use third parties to get them trained - so that the tools can be used to keep the system running in top condition. Remember that voice communication is still considered a top priority in today's business world, and losing it, even for a few hours, can make a customer start looking for someone to replace you as a vendor.

Extension Points

Many people today are used to just using the phone to talk, or maybe send faxes, but once the move to IP is made, the benefits will start to bring on questions about other methods of application integration and additional ways to leverage the new communications system. One thing that you should always consider on any new system, not just VoIP, is the ways you can utilize it going forward for things other than just your current needs. A car would not be much use if you could only drive it back and forth to work, would it? The same goes for your telephony solution. Right from the start you should investigate the extension areas of all the solutions you look at and at least gain an understanding of the features and benefits that each system may or may not offer. For example, it could be very disappointing to get a system all in place and six months later determine that you still need to add a bank of analog trunk lines to send and receive faxes because your solution did not include the ability for Fax over IP (FoIP) codecs. Making blanket assumptions like 'just because traditional faxes use our existing voice lines, the VoIP system should also do fax' can lead to some very tense moments across a boardroom table. Also consider integration with other areas like Instant Messaging (IM) and application integration, such as the ability to build basic Interactive Voice Response (IVR) menus (e.g., 'To talk to support, please press 1; to talk to sales, please press 2, ...') into the system and create simple auto-attendant applications. These simple features can help add some great value that may not have been considered previously, and allow you to recoup the costs of a VoIP implementation over a shorter timeline than previously anticipated. Areas like this can allow you to bring systems together under one area and thus cut down the size of your external vendor list.
Conclusion

As you can see, the move to VoIP is fraught with decisions, technical considerations, and even some simple human capital management opportunities. But the gains in productivity and efficiency, and the ability to leverage existing infrastructure while gaining valuable benefits in the areas of long-term manageability, application integration, multi-modal communications options and simpler-to-manage infrastructure, far outweigh the potential problems, as long as the map forward is well thought out and planned. As with most IT-based business decisions, it is always good to ensure that everyone understands the possible features and benefits, as well as the potential risks and how they can be mitigated, to derive the value that is expected.

Disclosures and References

As part of my previous experience in VoIP technology, I spent 10 years as a product and training specialist working for both Intel Corporation and Dialogic Corporation in the area of digital PBX TDM-to-IP interfacing with regards to the NetStructure PBX/IP media gateway product line. In addition, my secondary focus was working closely with vendors offering tightly integrated VoIP solutions such as Microsoft Exchange Unified Messaging and IBM Lotus Sametime using these devices, as well as the design and development of product training classes, certification programs for user and administrative positions, and product documentation collateral.

Sunday, November 7, 2010

Increase Visitor Traffic for your website

Increasing website traffic is one of the most important factors in growing your site. If you have a website - especially one you want to monetize - this is undoubtedly a question you've asked yourself. Often, the single biggest impact on the level of web traffic your site gets is its ranking on the search engine results page (SERP). Major search engines like Yahoo!, Google, Ask, and MSN, however, choose to add some intrigue to the pursuit of rising up the SERP ladder by keeping the majority of their metrics a secret. The resulting effect leaves the webmaster and search community with the daunting task of sifting through rumors and experience to shorten the slog through massive amounts of trial and error needed to figure out what really works.

But fear not good people of the internet! There is hope on the horizon yet. Lucky for all of us there are more people with great ideas and ambition trying to solve the riddles than there are riddle masters holding us back. Thanks to these internet superheroes much of the mystery that shrouds effective search engine marketing has been lifted, and what we are left with is a much clearer picture of what works and what does not.

Choose the Right Blog Software, The right blog CMS makes a big difference. If you want to set yourself apart, I recommend creating a custom blog solution - one that can be completely customized to your users. In most cases, WordPress, Blogger, MovableType or Typepad will suffice, but building from scratch allows you to be very creative with functionality and formatting. The best CMS is something that's easy for the writer(s) to use and brings together the features that allow the blog to flourish. Think about how you want comments, archiving, sub-pages, categorization, multiple feeds and user accounts to operate in order to narrow down your choices. OpenSourceCMS is a very good tool to help you select software if you go that route.

Host Your Blog Directly on Your Domain, Hosting your blog on a different domain from your primary site is one of the worst mistakes you can make. A blog on your domain can attract links, attention, publicity, trust and search rankings - by keeping the blog on a separate domain, you shoot yourself in the foot. From worst to best, your options are - Hosted (on a solution like Blogspot or Wordpress), on a unique domain (at least you can 301 it in the future), on a subdomain (these can be treated as unique from the primary domain by the engines) and as a sub-section of the primary domain (in a subfolder or page - this is the best solution).
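If you do have to start on a unique domain, that eventual 301 can be a one-line Apache rule. A minimal sketch, assuming mod_rewrite in the old domain's .htaccess (example.com and the /blog/ path are placeholders):

RewriteEngine On
# Permanently redirect everything on the old domain to the blog's new home
RewriteRule ^(.*)$ http://www.example.com/blog/$1 [R=301,L]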

Write Title Tags with Two Audiences in Mind, First and foremost, you're writing a title tag for the people who will visit your site or have a subscription to your feed. Title tags that are short, snappy, on-topic and catchy are imperative. You also want to think about search engines when you title your posts, since the engines can help to drive traffic to your blog. A great way to do this is to write the post and the title first, then run a few searches at Overture, WordTracker & KeywordDiscovery to see if there is a phrasing or ordering that can better help you to target "searched for" terms.

Participate at Related Forums & Blogs, Whatever industry or niche you're in, there are bloggers, forums and an online community that's already active. Depending on the specificity of your focus, you may need to think one or two levels broader than your own content to find a large community, but with the size of the participatory web today, even the highly specialized content areas receive attention. A great way to find out who these people are is to use Technorati to conduct searches, then sort by number of links (authority). Del.icio.us tags are also very useful in this process, as are straight searches at the engines (Ask.com's blog search in particular is of very good quality).

Tag Your Content, Technorati is the first place that you should be tagging posts. I actually recommend having the tags right on your page, pointing to the Technorati searches that you're targeting. There are other good places to ping - del.icio.us and Flickr being the two most obvious (the only other one is Blogmarks, which is much smaller). Tagging content can also be valuable to help give you a "bump" towards getting traffic from big sites like Reddit, Digg & StumbleUpon (which requires that you download the toolbar, but trust me - it's worth it). You DO NOT want to submit every post to these sites, but that one out of twenty (see tactic #18) is worth your while.

Launch Without Comments, There's something sad about a blog with 0 comments on every post. It feels dead, empty and unpopular. Luckily, there's an easy solution - don't offer the ability to post comments on the blog and no one will know that you only get 20 uniques a day. Once you're upwards of 100 RSS subscribers and/or 750 unique visitors per day, you can open up the comments and see light activity. Comments are often how tech-savvy new visitors judge the popularity of a site (and thus, its worth), so play to your strengths and keep your obscurity private.

Don't Jump on the Bandwagon, Some memes are worthy of being talked about by every blogger in the space, but most aren't. Just because there's huge news in your industry or niche DOES NOT mean you need to be covering it, or even mentioning it (though it can be valuable to link to it as an aside, just to integrate a shared experience into your unique content). Many of the best blogs online DO talk about the big trends - this is because they're already popular, established and are counted on to be a source of news for the community. If you're launching a new blog, you need to show people in your space that you can offer something unique, different and valuable - not just the same story from your point of view. This is less important in spaces where there are very few bloggers and little online coverage and much more in spaces that are overwhelmed with blogs (like search, or anything else tech-related).

Link Intelligently, When you link out in your blog posts, use convention where applicable and creativity when warranted, but be aware of how the links you serve are part of the content you provide. Not every issue you discuss or site you mention needs a link, but there's a fine line between overlinking and underlinking. The best advice I can give is to think of the post from the standpoint of a relatively uninformed reader. If you mention Wikipedia, everyone is familiar and no link is required. If you mention a specific page at Wikipedia, a link is necessary and important. Also, be aware that quoting other bloggers or online sources (or even discussing their ideas) without linking to them is considered bad etiquette and can earn you scorn that could cost you links from those sources in the future. It's almost always better to be over-generous with links than under-generous. And link condoms? Only use them when you're linking to something you find truly distasteful or have serious apprehension about.

Invite Guest Bloggers, Asking a well known personality in your niche to contribute a short blog on their subject of expertise is a great way to grow the value and reach of your blog. You not only flatter the person by acknowledging their celebrity, you nearly guarantee yourself a link or at least an association with a brand that can earn you readers. Just be sure that you really are getting a quality post from someone that's as close to universally popular and admired as possible (unless you want to start playing the drama linkbait game, which I personally abhor). If you're already somewhat popular, it can often be valuable to look outside your space and bring in guest authors who have a very unique angle or subject matter to help spice up your focus. One note about guest bloggers - make sure they agree to have their work edited by you before it's posted. A disagreement on this subject after the fact can have negative ramifications.

Eschew Advertising, Usually, I ignore AdSense, but I also cast a sharp eye towards the quality of the posts and professionalism of the content when I see AdSense. That's not to say that contextual advertising can't work well in some blogs, but it needs to be well integrated into the design and layout to help deflect criticism. Don't get me wrong - it's unfair to judge a blog by its cover (or, in this case, its ads), but spend a lot of time surfing blogs and you'll have the same impression - low quality blogs run AdSense and many high quality ones don't. I always recommend that whether personal or professional, you wait until your blog has achieved a level of success before you start advertising. Ads, whether they're sponsorships, banners, contextual or other, tend to have a direct, negative impact on the number of readers who subscribe, add to favorites and link - you definitely don't want that limitation while you're still trying to get established.

Go Beyond Text in Your Posts, Blogs that contain nothing but line after line of text are more difficult to read and less consistently interesting than those that offer images, interactive elements, the occasional multimedia content and some clever charts & graphs. Even if you're having a tough time with non-text content, think about how you can format the text using blockquotes, indentation, bullet points, etc. to create a more visually appealing and digestible block of content.

Cover Topics that Need Attention, In every niche, there are certain topics and questions that are frequently asked or pondered, but rarely have definitive answers. While this recommendation applies to nearly every content-based site, it's particularly easy to leverage with a blog. If everyone in the online Nascar forums is wondering about the components and cost of an average Nascar vehicle - give it to them. If the online stock trading industry is rife with questions about the best performing stocks after a terrorist threat, your path is clear. Spend the time and effort to research, document and deliver and you're virtually guaranteed link-worthy content that will attract new visitors and subscribers.

Pay Attention to Your Analytics, Visitor tracking software can tell you which posts your audience likes best, which ones don't get viewed and how the search engines are delivering traffic. Use these clues to react and improve your strategies. Feedburner is great for RSS and I'm a personal fan of Indextools. Consider adding action tracking to your blog, so you can see what sources of traffic are bringing the best quality visitors (in terms of time spent on the site, # of page views, etc). I particularly like having the "register" link tagged for analytics so I can see what percentage of visitors from each source is interested enough to want to leave a comment or create an account.

Use a Human Voice, Charisma is a valuable quality, both online and off. Through a blog, it's most often judged by the voice you present to your users. People like empathy, compassion, authority and honesty. Keep these in the forefront of your mind when writing and you'll be in a good position to succeed. It's also critical that you maintain a level of humility in your blogging and stick to your roots. When users start to feel that a blog is taking itself too seriously or losing the characteristics that made it unique, they start to seek new places for content. We've certainly made mistakes (even recently) that have cost us some fans - be cautious to control not only what you say, but how you say it. Lastly - if there's a hot button issue that has you posting emotionally, temper it by letting the post sit in draft mode for an hour or two, re-reading it and considering any revisions. With the advent of feeds, once you publish, there's no going back.

Archive Effectively, The best archives are carefully organized into subjects and date ranges. For search traffic (particularly long tail terms), it can be best to offer the full content of every post in a category on the archive pages, but from a usability standpoint, just linking to each post is far better (possibly with a very short snippet). Balance these two issues and make the decision based on your goals. A last note on archiving - pagination in blogging can be harmful to search traffic, rather than beneficial (as you provide constantly changing, duplicate content pages). Pagination is great for users who scroll to the bottom and want to see more, though, so consider putting a "noindex" in the meta tag or in the robots.txt file to keep spiders where they belong - in the well-organized archive system.
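As a quick sketch of those two options (the /archive/page/ path is just an illustration): a paginated archive page can carry

<meta name="robots" content="noindex, follow">

in its head, or robots.txt can keep spiders out of the paginated URLs entirely:

User-agent: *
Disallow: /archive/page/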

Implement Smart URLs, The best URL structure for blogs is, in my opinion, as short as possible while still containing enough information to make an educated guess about the content you'll find on the page. I don't like the 10 hyphen, lengthy blog titles that are the byproduct of many CMS plugins, but they are certainly better than any dynamic parameters in the URL. Yes - I know I'm not walking the talk here, and hopefully it's something we can fix in the near future. To those who say that one dynamic parameter in the URL doesn't hurt, I'd take issue - just re-writing a ?ID=450 to /450 has improved search traffic considerably on several blogs we've worked with.
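For instance, a minimal Apache mod_rewrite rule for that kind of clean-up might look like this (a sketch assuming an .htaccess file and a script named index.php, both illustrative):

RewriteEngine On
# Serve the clean URL /450 from the old dynamic URL /index.php?ID=450
RewriteRule ^([0-9]+)$ /index.php?ID=$1 [L]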

Reveal as Much as Possible, The blogosphere is in love with the idea of an open source world on the web. Sharing vast stores of what might ordinarily be considered private information is the rule, rather than the exception. If you can offer content that's usually private - trade secrets, pricing, contract issues, and even the occasional harmless rumor, your blog can benefit. Make a decision about what's off-limits and how far you can go and then push right up to that limit in order to see the best possible effects. Your community will reward you with links and traffic.

Only One Post in Twenty Can Be Linkbait, Not every post is worthy of making it to the top of Digg, Del.icio.us/popular or even a mention at some other blogs in your space. Trying to over-market every post you write will result in pushback and ultimately lead to negative opinions about your efforts. The less popular your blog is, the harder it will be to build excitement around a post, but the process of linkbait has always been trial and error - build, test, refine and re-build. Keep creating great ideas and bolstering them with lots of solid, everyday content and you'll eventually be big enough to where one out of every 20-40 posts really does become linkbait.

Make Effective Use of High Traffic Days, If you do have linkbait, whether by design or by accident, make sure to capitalize. When you hit the front page of Digg, Reddit, Boing Boing, or, on a smaller scale, attract a couple hundred visitors from a bigger blog or site in your space, you need to put your best foot forward. Make sure to follow up on a high traffic time period with 2-3 high quality posts that show off your skills as a writer, your depth of understanding and let visitors know that this is content they should be sticking around to see more of. Nothing kills the potential linkbait "bump" faster than a blog whose content doesn't update for 48 hours after they've received a huge influx of visitors.

Create Expectations and Fulfill Them, When you're writing for your audience, your content focus, post timing and areas of interest will all become associated with your personal style. If you vary widely from that style, you risk alienating folks who've come to know you and rely on you for specific data. Thus, if you build a blog around the idea of being an analytical expert in your field, don't ignore the latest release of industry figures only to chat about an emotional issue - deliver what your readers expect of you and crunch the numbers. This applies equally well to post frequency - if your blog regularly churns out 2 posts a day, having two weeks with only 4 posts is going to have an adverse impact on traffic. That's not to say you can't take a vacation, but you need to schedule it wisely and be prepared to lose RSS subscribers and regulars. It's not fair, but it's the truth. We lose visitors every time I attend an SES conference and drop to one post every two days.

Build a Brand, Possibly one of the most important aspects of all in blogging is brand-building. As Zefrank noted, to be a great brand, you need to be a brand that people want to associate themselves with and a brand that people feel they derive value from being a member. Exclusivity, insider jokes, emails with regulars, the occasional cat post and references to your previous experiences can be off putting for new readers, but they're solid gold for keeping your loyal base feeling good about their brand experience with you. Be careful to stick to your brand - once you have a definition that people like and are comfortable with, it's very hard to break that mold without severe repercussions. If you're building a new blog, or building a low-traffic one, I highly recommend writing down the goals of your brand and the attributes of its identity to help remind you as you write.

Best of luck to all you bloggers out there. It's an increasingly crowded field to play in, but these strategies should help to give you an edge over the competition. As always, if you've got additions or disagreements, I'd love to hear them.

Friday, November 5, 2010

Optimize AdSense Keywords and Earnings

Google AdSense provides an easy opportunity for bloggers and webmasters to earn revenue from their hard work. To make significant money from your website or blog, the trick is to find the right way to deploy AdSense.

Simply cutting and pasting the AdSense code onto your web pages will not be enough. Try out variations of ad placements and ad formats. Finding the right layout that maximizes AdSense revenue usually takes a while.

One of the keys to a good AdSense click-through rate is to make sure that the ads served are relevant to your website visitors. If the goods and services offered by your advertisers are irrelevant, then your visitors will not be interested in visiting their sites. Make sure your content is relevant and is targeted at a narrow selection of related topics. Your meta tag keywords must reflect the content of your website.

There are numerous successful methods for increasing AdSense revenue. A great tip is to place the AdSense adverts on page 'hotspots'. The top left of a web page is well known to be an effective zone on which to place advertisements. The reason for this is that we all read from top to bottom and from left to right.

Another excellent Google AdSense tip is to locate adverts on your pages with the highest traffic. Decide which are the best pages by monitoring your web statistics and locating the main visitor entry pages.

Your AdSense ads should blend in with the rest of your web page. This generally makes the adverts seem like a natural part of your site, will reassure your visitors, and makes the site look more professional. AdSense allows website developers with content-centric websites to add a substantial revenue stream by adding contextual advertising, which can generate revenue from a few pennies a day to thousands of dollars per month.

Websites like Iotaweb.org help visitors search for keywords and key phrases which can produce high-dollar contextual advertising. There are approximately 360 top-paying Google AdSense keywords starting with the letter A. Keywords on Google are sorted by highest average CPC. Some of the main keywords are: adverse credit remortgages, at call conference, angeles drug los rehab, at go and so on.

The formula for success is: (# of visitors) × (click-through %) × (ad value) = income.
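To make that concrete with made-up numbers: 10,000 visitors × a 1% click-through rate × $0.50 average ad value works out to 100 clicks and $50 of income.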

How are ads created on Google? The process is simple: you create ads and choose keywords, which are words or phrases related to your business. When a visitor searches on Google using one of your keywords, your ad may appear next to the search results. You are actually advertising to an audience that's already interested in you.

Visitors simply click your ad to make a purchase or learn more about your product. You don't even need a webpage to get started - Google helps in creating one for you free of cost.

One can find numerous keyword research tools. Keyword Country is considered to be among the most comprehensive and complete keyword research tools. It connects to major engines for keyword research, including Google, MSN, Yahoo and Ask, and collects the niche keywords that the industry happens to be focusing on.

Its software targets Google AdSense keywords; Google is one of the highest-earning search engines.

With it you get access to:

a) The most profitable high-paying keywords
b) Traffic-building keywords
c) High-CTR keywords
d) Thousands of niche keywords from almost 600,000 markets online
e) Ways to increase website traffic, build more content and study the behavior of keywords

Since your aim is to earn money, you have to be specific in choosing the best keywords to attract organic traffic to your website. When we think of high-paying keywords, the first thing that comes to mind is CPC - cost per click. The CPC is the maximum amount that an advertiser pays per click. The higher the CPC of a keyword, the higher the payout for that keyword. Google has been the most accurate source of CPC data.

All website publishers and advertisers can use Google AdSense to advertise and earn money in return. Content can be advertised without much effort: you can display targeted Google ads on your website's content pages and earn from valid clicks or impressions. Through AdSense you have access to Google's advertiser network and can display ads that are targeted to a particular audience. By knowing the most expensive keywords on the internet, you can create websites and web pages based on these keywords; on these pages you can show expensive ads and sell your affiliate products or offers to earn money.

VDSL: An Improvement in Internet Speed

VDSL/VHDSL (Very High Bitrate Digital Subscriber Line) is an improved version of ADSL (Asymmetric Digital Subscriber Line), the technology we commonly use to connect to the internet. The two differ in how they are implemented, so you probably cannot use the equipment of one for the other. The most significant difference between the two technologies, and the one most relevant to users, is speed. ADSL can reach maximum speeds of 8 Mbps download and 1 Mbps upload. In comparison, VDSL can offer up to 52 Mbps download and 16 Mbps upload.
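To put those numbers in perspective: a 650 MB file (roughly 5,200 megabits) takes about 11 minutes to download at ADSL's 8 Mbps, but well under 2 minutes at VDSL's 52 Mbps.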

Because of the extremely high speeds that VDSL can accommodate, it is being looked at as a good prospective technology for high-bandwidth applications like VoIP telephony and even HDTV transmission, which ADSL is not capable of. Another very useful feature of VDSL stems from the fact that it uses 7 different frequency bands for the transmission of data. The user then has the power to customize whether each frequency band is used for download or upload. This kind of flexibility is very useful if you need to host files that are to be downloaded by a lot of people.

The biggest drawback of VDSL is the distance it can run from the telephone exchange. Within 300 m you may still get close to maximum speed, but beyond that the line quality and the speed deteriorate rather quickly. Because of this, ADSL is still preferable unless you live extremely close to the telephone exchange of the company you are subscribed to. Most VDSL subscribers are companies who need a very fast server and often place their own servers in very close proximity.

Due to the limitations of VDSL and its high price, its expansion is not as prolific as that of ADSL. VDSL is only widespread in countries like South Korea and Japan. While other countries also have VDSL offerings, it is only handled by a few companies, mostly one or two per country. In comparison, ADSL is very widely used, and all countries that offer high-speed internet offer ADSL.

The DSL technology known as very high bit-rate DSL (VDSL) is seen by many as the next step in providing a complete home-communications/entertainment package. There are already some companies, such as U.S. West (now part of Qwest), that offer VDSL service in selected areas. VDSL provides an incredible amount of bandwidth, with speeds up to about 52 megabits per second (Mbps). Compare that with a maximum speed of 8 to 10 Mbps for ADSL or cable modem and it's clear that the move from current broadband technology to VDSL could be as significant as the migration from a 56K modem to broadband. As VDSL becomes more common, you can expect that integrated packages will be cheaper than the total cost of the current separate services.

In this article, you'll learn about VDSL technology, why it's important and how it compares to other DSL technologies. But first, let's take a look at the basics of DSL.

A standard telephone installation in the United States consists of a pair of copper wires that the phone company installs in your home. A pair of copper wires has plenty of bandwidth for carrying data in addition to voice conversations. Voice signals use only a fraction of the available capacity on the wires. DSL exploits this remaining capacity to carry information on the wire without disturbing the line's ability to carry conversations.

Standard phone service limits the frequencies that the switches, telephones and other equipment can carry. Human voices, speaking in normal conversational tones, can be carried in a frequency range of 400 to 3,400 Hertz (cycles per second). In most cases, the wires themselves have the potential to handle frequencies of up to several-million Hertz. Modern equipment that sends digital (rather than analog) data can safely use much more of the telephone line's capacity, and DSL does just that.
VDSL could change the face of e-commerce by allowing all types of media to run smoothly and beautifully through your computer.

Wednesday, November 3, 2010

Slow WiFi Connection Solution

If several computers are exhibiting the same slow connectivity, chances are good it has something to do with the WiFi. For example, perhaps the router got moved to a location that's blocking some of the signal.
It could also be that the router is failing, or that more library patrons are sharing a fixed amount of bandwidth (like more cars on a highway leading to slow-moving traffic). Without more information, it can be tricky to troubleshoot a problem like this.
However, there's one step worth trying for anyone vexed by sluggish WiFi: try a direct connection to the router. (Actually, that should be your second step; the first is to reset both the modem and router.)

In other words, disable your PC's WiFi, then connect it directly to the router using an Ethernet cable. Windows should automatically detect the new connection and get you online accordingly, though you may have to reboot.
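Once the cable is in, a quick sanity check from a command prompt confirms the wired link (192.168.1.1 is a common default router address; substitute your own):

ipconfig /all
ping 192.168.1.1

If the ping succeeds but pages still crawl, the bottleneck is beyond the router.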

Problem solved? If so, you know there's some kind of WiFi issue to blame. If not, the culprit is probably a bad router, bad router settings, or the Internet connection itself (check with your service provider). Space doesn't permit me to address all these possibilities here, but at least you'll have narrowed down the problem.

Tips to choose the right VPN

Suppose you need to connect your branch offices, remote workers and telecommuters to your corporate network. They need secure access, 24 hours a day. You know you're going to use some sort of VPN - the question is, which one? IPSec was the big thing until recently, but SSL VPNs have been gaining in popularity lately. And are there any other options?

Unfortunately, the answer to which is best is 'it depends', and it's likely that you'll need more than one type to handle all your requirements. So let's do a quick round-up of the pros and cons of your main choices.

SSL
We've written about the benefits of SSL VPN before, but in a nutshell, it creates a secure session from your PC browser to the application server you're accessing. Actually, in most cases, to a proxy server, rather than to the end application.


Remember that if you're SSL-encrypting traffic end to end, it can't be seen by your firewall, Intrusion Prevention Systems, load balancing devices or any other network management systems. SSL on your servers also adds a fair bit of overhead, so it's probably best to offload this to a proxy anyway, and then route the traffic through your secure corporate LAN.

The upside is that as far as your users are concerned, it's just web access. There's no client software to load, and it can be used anywhere. On the minus side, if you need access to applications that aren't webified, you'll need something to act as an intermediary - that may include your email.

Also, it's all web traffic, so you can forget about Quality of Service for voice, and things like FTP and telnet aren't natively supported, though you should be able to use an applet to forward traffic to the right TCP port number and get access that way. Multicast won't work, and it's not a site-to-site option.

IPSec
Tried and (almost) trusted, IPSec sets up a tunnel from the remote site - either a single user with client software on their PC or a network device terminating the tunnel for a whole office of users - into your central site. Once connected, you access your applications as normal, and it's immaterial whether they're web apps or not.

As the name suggests, it's designed for IP traffic, though that's not so much of an issue nowadays, but if you do have non-IP data, you'd need to configure up GRE tunnels separately and run IPSec over them, as you would to support multicast traffic.

Hybrids
A few companies have managed to combine features of SSL and IPSec - for example Net6, which is now owned by Citrix. Others are working to do the same.

MPLS
Let's not forget MPLS VPNs. They're no good for remote access for individual users, but for site-to-site connectivity, they're the most flexible and scalable option. All the work is at the network level - users just see standard network connectivity - and they support QoS and multicast, so you don't have to worry about which apps people need access to. Of course, an MPLS network isn't as easy to set up or add to as the others, and it's bound to be more expensive.

Remote Users
So for individual users, who may well be travelling, or need access from hotels and Internet cafes, forget MPLS. IPSec is good if you have control over your users' PCs and can manage VPN client downloads and updates. It's also probably the only option for IT support staff, or anyone who needs to be able to access a wide range of applications and services. It scales quite well, and VPN concentrators at the central sites make it reasonably manageable.

SSL comes into its own where you have people accessing your network from non-corporate PCs: partners, suppliers, public Internet-connected PCs, that sort of thing, since there's no client software needed. Where your users just need access to web applications, it's easy, quick and cheap. If you can get an Internet connection, you can get to your data. But it may not handle all the applications your enterprise needs.

Remote Offices
As soon as you have multiple users in one place, though, SSL may not be a good option. It's more efficient to have one secure link from your remote site into the central office. If your traffic flows are such that all remote sites access your central site, in a hub-and-spoke arrangement, then IPSec is a good enough option. Your users don't have to bother with any client software, since it's all done in the network.

However, if every branch needs to communicate with every other, building a meshed arrangement is a real pain - especially if you need to set up GRE tunnels for non-IP or multicast traffic. Bear in mind that if you're deploying this and connecting over the Internet, you'll have no QoS guarantees, and your SLAs may not be suitable for business needs.

For large offices, or ones with complex requirements for connectivity or QoS, an MPLS VPN is likely to be your best bet. Even then, you'll need to make sure that your provider can support the levels of QoS you need, knows how to cater for multicast traffic, and can make changes in a sensible timeframe.

It's likely you're going to end up with a mix of VPN types to match your mix of network users. Don't try to force everyone to use the same access method, or you'll end up making life difficult for them and stressful for you. Define several categories of users, match each to the technology that suits it best, and you should find it becomes relatively straightforward to suit most needs.

WiFi WLAN Roaming Basics

The whole point of wireless LANs is the convenience of mobility: being able to wander from one part of the office to the other. Users expect the same completely transparent service they get as their mobile phones move from one cell to another, but in the world of 802.11 it’s not actually that easy. There’s a lot of publicity about roaming in Wi-Fi just now; for instance, a new IEEE group on testing Wi-Fi has found that it is impossible to compare roaming times without a definition of roaming. While many wireless switch vendors make a point of roaming at Layer 3 (a technology we’ll cover the technicalities of in a later article), several other vendors (such as Bluesocket and Vernier, reviewed here under its HP badge) solve the problem by keeping all access points on a single subnet, so the roaming only happens at Layer 2 and the roaming device keeps the same IP address. What most people miss is that even roaming within a subnet, at Layer 2, has its challenges. What’s involved?

When a WLAN client moves from the range of one Access Point (AP) to another in the same subnet, it needs to find the best AP, decide when to roam onto it, associate with it and do any authentication required, as per your security policies. Then the wired network has to relearn the location of the client, so that data can be sent to it. All of this takes time, and this is without the client having to worry about getting a new IP address! The scanning and decision-making part of the roaming process (see How to Make your WLAN roam faster) allows the client to find a new AP on an appropriate channel as the user moves. When this happens, the client must associate with the new AP. It must then, assuming that it is an 802.1x supplicant (see The EAP Heap), reauthenticate with the RADIUS server. This is transparent to the user, but the delay in this happening may not be. It can take up to a second for association and authentication to occur (see below for implications and solutions).

IAPP
The next part of the process is for the rest of the network to be made aware that the client has shifted. This calls for AP-to-AP communication, which was never catered for in the original 802.11 spec. Vendors had their own ways of passing updates; however 802.11f, the Inter-Access Point Protocol, has now been published by the IEEE as a trial-use standard - it sits in this state for two years before being submitted as a full-use standard - to facilitate multi-vendor AP interoperability. IAPP calls for the new servicing AP to send out two packets onto the wired LAN. One of these is actually set with the source address of the client (the standard says this should be a broadcast, however some implementations still use unicast to the previous AP or a multicast) and is used by intervening switches to update their MAC address tables with the client’s new location. The other is an IAPP ADD-notify packet from the new AP to an IAPP multicast address that all APs subscribe to, which contains the MAC address of the station it has just associated with. All APs will receive this packet, and the one that had been associated with that station will use the sequence number included to determine that this is newer information and remove the stale association from its internal table. IAPP provides for the sharing of information between APs. The format of this information is specified, as "contexts", but the actual content is not defined, so it’s not yet hugely useful as far as vendor interoperability is concerned. Also, IAPP has no specific provision for security.

Who Cares?
So, worst case, you’re probably looking at about one second where your client can’t be reached over the network. For a lot of clients and applications, this isn’t an issue. If you’re walking from one room to another carrying your laptop, and you want to use email or a web browser, it’s not a problem. In fact, most TCP-based applications will be able to handle this sort of hiccup (remember that in this instance there’s no address change). UDP applications are less able to handle interruptions, and unfortunately, these are the ones where a break would be most noticed by the user. The killer? Voice. Not only is VoWLAN UDP-based for the bearer traffic, but it’s also the one application you are likely to be using as you move between APs. And you are definitely going to notice a one-second hit. Which is presumably why the vendors that are pushing fast roaming for 802.11 are the ones squarely behind the use of wireless handsets in an IP Telephony environment, such as Cisco, SpectraLink and Symbol.

Related standards
In fact these are three of the companies behind the drive for a new IEEE Working Group to create a standard to handle faster Layer 2 roaming. There are several related standards and works-in-progress, but none that actually cover this specific aspect:

* As already discussed, IAPP—802.11f—isn’t designed for speed.
* 802.11i, the security standard (not yet ratified), has provision for secure fast handoff, but it’s too security-specific for this requirement.
* 802.11k—Radio Resource Management—might help in that it should cater for faster discovery of APs. Again, not yet finalised.
* 802.21 isn’t specifically for wireless LANs at all. It’s aimed at handoff between heterogeneous networks (wired, 802.11, Bluetooth) and while it will deal with inter-ESS roaming (i.e. subnet to subnet in a WLAN), it won’t speed up the Layer 2 process which is needed prior to any Layer 3 interaction. This was the P802 Handoff Study Group, and is just in the process of kicking off now.

Fast roaming now
In the meantime of course, there are proprietary solutions. The two parts that need to be speeded up to cut down outage times are the scanning process (to allow clients to find new suitable APs to associate to), and, specifically for security, a faster way of reauthenticating to cut out the RADIUS request/response process. There are things that can be done to speed up the time it takes for a client to find another suitable AP. An AP can maintain information on its adjacent APs, which it can pass to a client on request—this will give the client a better indication of usable channels to scan, for example.

The biggest time saver, however, is reckoned to be in localising the 802.1x authentication process. Cisco has incorporated Fast Secure Roaming into its Wireless Domain Services (WDS) portfolio as part of its Structured Wireless Aware Networking offering, which in effect allows an AP on each local subnet to act as the authenticator for clients. When a client (or other AP) goes through the initial RADIUS authentication, it does it via one AP running WDS. This lets that AP establish shared keys between itself and every other entity in the L2 domain, and allows for quicker reauthentication. Plans are for this capability to be included in Cisco’s router/switch platforms later this year as part of its SWAN development. Symbol provides similar functionality in its hardware, while Airespace also caters for fast roaming in its wireless switches and appliances, and companies such as Bluesocket, which use gateways to control pretty dumb APs, manage everything centrally. Proxim handles things differently, pre-authenticating clients to nearby APs as well as the one currently in use in preparation for the client moving.

So before you get excited about Layer 3 roaming, make sure you understand how your vendor of choice implements it at Layer 2. If that bit’s not fast enough to stop you losing traffic, you’ll never be able to move across subnets. It’s likely to be years before there’s a usable standard in place, and in the meantime, while you can probably get APs from different vendors to work together, there’s no guarantee of interoperability if you want to turn on their various fast roaming options.
Sunday, October 31, 2010

WiFi peer-to-peer Direct is Go

The Wi-Fi Alliance on Monday announced that its direct peer-to-peer networking version of WiFi, called WiFi Direct, is now available on several new WiFi devices. The Alliance also announced that it has begun the process of certifying devices for WiFi Direct compatibility.

The organization has already certified a handful of WiFi cards from Atheros, Broadcom, Intel, Ralink, Realtek, and Cisco, as well as the Cisco Aironet 1240 Series access points. These devices will also be used in the test suite to certify that future devices are compatible with the protocol. Any device passing the tests will be designated "Wi-Fi CERTIFIED Wi-Fi Direct."

"We designed Wi-Fi Direct to unleash a wide variety of applications which require device connections, but do not need the internet or even a traditional network," said Edgar Figueroa, CEO of the Wi-Fi Alliance, in a statement. The certification program will ensure compatibility with the standard across a range of devices. WiFi Direct devices can also connect to older "Wi-Fi CERTIFIED" devices for backward compatibility, so chances are your current equipment will work with newer devices using the protocol.

The new protocol allows compatible devices to connect in a peer-to-peer fashion, either one-to-one or in a group, to share data with each other. The Alliance noted that many users carry a lot of data with them on portable devices like smartphones; WiFi Direct will enable users to connect these devices with each other to share that data without the need for a local WiFi network.

Though ad-hoc WiFi and Bluetooth protocols serve similar purposes, WiFi Direct offers the longest range and fastest throughput of the three, and includes enterprise-class management and security features.

Windows 7 and Server 2008 R2 Patch Details

Microsoft has released a number of non-security updates, the majority of which are for the latest versions of its client and server operating systems. All the patches are available on Windows Update and the Microsoft Download Center and most will require a restart. With the exception of the last patch, they're all for Windows 7 or Windows Server 2008 R2.

Most of these updates will be rolled into Service Pack 1 for Windows 7 and Windows Server 2008 R2. Testers got the first Windows 7 SP1 beta build two months ago, but just today Microsoft sent out build 7601.17077 to selected PC and Technology Adoption Program partners, according to ZDNet.

The first patch (KB2028560) is vaguely described as one that delivers "new functionality and performance improvements for the graphics platform."

The second patch (KB2249857) addresses an issue that occurs on hard disk drives larger than 2TB. If the OS is configured to save dump files to a volume on such an HDD, part of the dump file sits at a disk offset beyond the 2TB boundary, and Windows is either put into hibernation or crashes, then volumes on the HDD may be corrupted and data lost. If the corrupted volumes include the system partition, the computer will no longer boot.

The third patch (KB982110) fixes a problem when running 32-bit applications on a 64-bit edition of Windows 7 or Windows Server 2008 R2. If the application uses the QueryPathOfRegTypeLib function to retrieve the path of a registered type library, it may return the path of the 64-bit version of the type library instead of the 32-bit one.

The fourth patch (KB2272691) is for a game, application, or firmware that is installed incorrectly, causes system instability, or has primary functions that do not work correctly. The update will either prevent incompatible software from running (hard block with third-party manufacturer consent), notify the user that incompatible software is starting to run (soft block), or improve the software's functionality (update). It lists just a single application (Sensible Vision FastAccess) as being affected.

The fifth patch (KB2203330) solves a problem when installing a third-party application for a multiple-transport Media Transfer Protocol (MTP) device or a Windows Portable Device (WPD). Connecting an MTP or WPD device may result in an APC_INDEX_MISMATCH stop error because of a race condition in the Compositebus.sys driver.

The last patch (KB979453) is for Windows Home Server and addresses five separate issues that were found since the release of WHS Power Pack 3.

Microsoft Windows Azure Future Concept

Microsoft unveiled its roadmap for the Windows Azure cloud computing platform. Moving beyond mere Infrastructure-as-a-Service (IaaS), the company is positioning Windows Azure as a Platform-as-a-Service offering: a comprehensive set of development tools, services, and management systems to allow developers to concentrate on creating available, scalable applications.

Over the next 12-18 months, a raft of new functionality will be rolled out to Windows Azure customers. These features will both make it easier to move existing applications into the cloud, and enhance the services available to cloud-hosted applications.

The company believes that putting applications into the cloud will often be a multistage process. Initially, the applications will run unmodified, which will remove patching and maintenance burdens, but not take advantage of any cloud-specific functionality. Over time, the applications will be updated and modified to start to take advantage of some of the additional capabilities that the Windows Azure platform has to offer.

Microsoft is building Windows Azure into an extremely complete cloud platform. Windows Azure currently takes quite a high-level approach to cloud services: applications have limited access to the underlying operating system, and software that requires Administrator installation isn't usable. Later in the year, Microsoft will enable Administrator-level access and Remote Desktop to Windows Azure instances.

For even more compatibility with existing applications, a new Virtual Machine role is being introduced. This will allow Windows Azure users to upload VHD virtual disks and run these virtual machines in the cloud. In a similar vein, Server Application Virtualization will allow server applications to be deployed to the cloud, without the need either to rewrite them or package them within a VHD. These features will be available in beta by the end of the year. Next year, virtual machine construction will be extended to allow the creation of virtual machines within the cloud. Initially, virtual machine roles will support Windows Server 2008 R2; in 2011, Windows Server 2003 and Windows Server 2008 with Service Pack 2 will also be supported.

Microsoft also has a lot to offer for applications that are cloud-aware. Over the past year, SQL Azure, the cloud-based SQL Server version, has moved closer to feature parity with its conventional version: this will continue with the introduction of SQL Azure Reporting, bringing SQL Server's reporting features to the cloud. New data syncing capabilities will also be introduced, allowing SQL Azure to replicate data with on-premises and mobile applications. Both of these will be available in previews by the end of the year, with final releases in 2011.

A range of new building-block technologies are also being introduced, including a caching component (similar to systems such as memcached) and a message bus (for reliable delivery of messages to and from other applications or mobile devices). A smaller, cheaper tier of Windows Azure instances is also being introduced, comparable to Amazon's recently-released Micro instances of EC2.

The breadth of services that Microsoft is building for the Windows Azure platform is substantial. Compared to Amazon's EC2 or Google's AppEngine, Windows Azure is becoming a far more complete platform: while EC2 and AppEngine both offer a few bits and pieces that are comparable (EC2 is particularly strong at hosting existing applications in custom virtual machines, for example), they aren't offering the same cohesive set of services.

Nonetheless, there are still areas that could be improved. The billing system is currently inflexible, and offers no ability for third parties to integrate with the existing Windows Azure billing. This means that a company wishing to offer its own building blocks for use by Windows Azure applications has to also implement its own monitoring and billing system. Windows Azure also has no built-in facility for automating job management and scaling.

Both of these gaps were pertinent to one of yesterday's demonstrations. Animation studio Pixar has developed a prototype version of its RenderMan rendering engine that works on Windows Azure. Traditionally, RenderMan was only accessible to the very largest animation studios, as it requires considerable investment in hardware to build render farms. By moving RenderMan to the cloud, smaller studios can use RenderMan for rendering jobs without having to maintain all those systems. It allows RenderMan to be sold as a service to anyone needing rendering capabilities.

Neither job management—choosing when to spin up extra instances, when to power them down, how to spread the different frames that need rendering between instances—nor billing are handled by Windows Azure itself. In both cases, Pixar needed to develop its own facilities. Microsoft recognizes that these are likely to be useful to a broad range of applications, and as such good candidates for a Microsoft-provided building block. But at the moment, they're not a part of the platform.

Microsoft CEO Steve Ballmer has said that Microsoft is "all in" with the cloud. The company is certainly working hard to make Windows Azure a better platform, and the commitment to the cloud extends beyond the Windows Azure team itself. Ars was told that all new development of online applications within Microsoft was using Windows Azure, and with few exceptions, existing online applications had migration plans that would be implemented in the next two years. The two notable exceptions are Hotmail and Bing, both of which already have their own, custom-built, dedicated server farms.

This internal commitment is no surprise given the history of the platform. Windows Azure was originally devised and developed to be an internal platform for application hosting. However, before there was any significant amount of internal usage, the company decided to offer it as a service to third parties. Now that the platform has matured, those internal applications are starting to migrate over. As such, this makes Windows Azure, in a sense, the opposite to both EC2 and AppEngine. Those products were a way for Amazon and Google to monetize their preexisting infrastructure investment—investment that had to be made simply to run the companies' day-to-day business.

With the newly announced features, there's no doubt that Windows Azure is shaping up to be a cloud computing platform that is both powerful and flexible. Microsoft is taking the market seriously, and its "all in" position seems to represent a genuine commitment to the cloud. What remains to be seen is whether this dedication will be matched by traditionally conservative businesses and developers, especially among small and medium enterprises. A move to the cloud represents a big change in thinking, and the new Windows Azure features will do nothing to assuage widespread fears such as a perceived loss of control. It is this change in mindset, not any technological issue, that represents the biggest barrier to widespread adoption of Windows Azure, and how Microsoft aims to tackle the problem is not yet clear.

Friday, October 29, 2010

Basic DNS Domain Server Setup

The Domain Name System is the software that lets you have name-to-number mappings on your computers. The name decel.ecel.uwa.edu.au maps to the number 130.95.4.2 and vice versa. This is achieved through the DNS. The DNS is a hierarchy. There are a small number of root domain name servers that are responsible for tracking the top-level domains and who is under them. The root domain servers between them know about all the people who have name servers that are authoritative for domains under the root.

Being authoritative means that if a server is asked about something in that domain, it can say with no ambiguity whether or not a given piece of information is true. For example, say we have domains x.z and y.z. There are by definition authoritative name servers for both of these domains, and we shall assume that the name servers in these two cases are machines called nic.x.z and nic.y.z, but that really makes no difference.

If someone asks nic.x.z whether there is a machine called a.x.z, then nic.x.z can authoritatively say yes or no, because it is the authoritative name server for that domain. If someone asks nic.x.z whether there is a machine called a.y.z, then nic.x.z asks nic.y.z whether such a machine exists (and caches this for future requests). It asks nic.y.z because nic.y.z is the authoritative name server for the domain y.z. The information about authoritative name servers is stored in the DNS itself, and as long as you have a pointer to a name server who is more knowledgeable than yourself, then you are set.
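
You can watch this happen by directing a query at a particular server with dig. Using the hypothetical names above (they won't resolve on a real network, of course), this asks nic.x.z directly about a.y.z:

dig @nic.x.z a.y.z

The answer will come back without the authoritative flag set, because nic.x.z had to fetch it from nic.y.z.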

When a change is made, it propagates slowly out through the Internet to eventually reach all machines. The following was supplied by Mark Andrews <Mark.Andrews@syd.dms.csiro.au>.

If both the primary and all secondaries are up and talking when a zone update occurs, and for the refresh period after the update, then the old data will live for at most (refresh + minimum), and on average (refresh/2 + minimum), for the zone. New information will be available from all servers after one refresh interval.

So with a refresh of 3 hours and a minimum of a day, you can expect everything to be working a little over a day after it is changed. If you have a longer minimum, it may take a couple of days before things return to normal.
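
To make that concrete, plugging the numbers from that example into the formulae above:

worst case = refresh + minimum = 3 + 24 = 27 hours
average = refresh/2 + minimum = 1.5 + 24 = 25.5 hours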

There is also a difference between a zone and a domain. The domain is the entire set of machines that are contained within an organisational domain name. For example, the domain uwa.edu.au contains all the machines at the University of Western Australia. A zone is the area of the DNS for which a server is responsible. The University of Western Australia is a large organisation and trying to track all changes to machines at a central location would be difficult. The authoritative name server for the zone uwa.edu.au delegates the authority for the zone ecel.uwa.edu.au to decel.ecel.uwa.edu.au. Machine foo.ecel.uwa.edu.au is in the zone that decel is authoritative for. Machine bar.uwa.edu.au is in the zone that uniwa.uwa.edu.au is authoritative for.

2 Installing the DNS:

First I'll assume you already have a copy of the Domain Name Server software. It is probably called named or in.named depending on your flavour of unix. I never had to get a copy, but if anyone thinks that information should be here then by all means tell me and I'll put it in. If you intend to use the package called BIND, then you should be sure that you get version 4.9.x, which is the most recent version at this point in time.

For more information on the latest version of BIND you should take a look at Internet Software Consortium which sponsors the development of BIND. - Kavli

2.1 The Boot File:

First step is to create the file named.boot. This describes to named (we'll dispense with in.named; take them to be the same) where the information that it requires can be found. This file is normally found in /etc/named.boot and I personally tend to leave it there because then I know where to find it. If you don't want to leave it there but place it in a directory with the rest of your named files, then there is usually an option on named to specify the location of the boot file.

An alternative is of course to make a symbolic link from /etc/named.boot to the wanted directory. - Kavli

Your typical boot file will look like this if you are an unimportant leaf node and there are other name servers at your site.

directory /etc/namedfiles

cache . root.cache
primary ecel.uwa.edu.au ecel.uwa.domain
primary 0.0.127.in-addr.arpa 0.0.127.domain
primary 4.95.130.in-addr.arpa 4.95.130.domain
forwarders 130.95.128.1

Here is an alternative layout used by Christophe Wolfhugel <Christophe.Wolfhugel@grasp.insa-lyon.fr>. He finds this easier because of the large number of domains he has. The structure is essentially the same, but the file names use the domain name rather than the IP subnet to describe the contents.

directory /usr/local/etc/bind
cache . p/root
forwarders 134.214.100.1 192.93.2.4
;
; Primary servers
;
primary fr.net p/fr.net
primary frmug.fr.net p/frmug.fr.net
primary 127.in-addr.arpa p/127
;
; Secondary servers
;
secondary ensta.fr 147.250.1.1 s/ensta.fr
secondary gatelink.fr.net 134.214.100.1 s/gatelink.fr.net
secondary insa-lyon.fr 134.214.100.1 s/insa-lyon.fr
secondary loesje.org 145.18.226.21 s/loesje.org
secondary nl.loesje.org 145.18.226.21 s/nl.loesje.org
secondary pcl.ac.uk 161.74.160.5 s/pcl.ac.uk
secondary univ-lyon1.fr 134.214.100.1 s/univ-lyon1.fr
secondary wmin.ac.uk 161.74.160.5 s/wmin.ac.uk
secondary westminster.ac.uk 161.74.160.5 s/westminster.ac.uk
;
;
; Secondary for addresses
;
secondary 74.161.in-addr.arpa 161.74.160.5 s/161.74
secondary 214.134.in-addr.arpa 134.214.100.1 s/134.214
secondary 250.147.in-addr.arpa 147.250.1.1 s/147.250
;
; Classes C
;
secondary 56.44.192.in-addr.arpa 147.250.1.1 s/192.44.56
secondary 57.44.192.in-addr.arpa 147.250.1.1 s/192.44.57

The lines in the named.boot file have the following meanings.

directory

This is the path that named will place in front of all file names referenced from here on. If no directory is specified, it looks for files relative to /etc.

cache

This is the information that named uses to get started. Named must know the IP number of at least some other name servers to get started. Information in the cache is treated differently depending on your version of named. Some versions of named use the information included in the cache permanently and others retain but ignore the cache information once up and running.

Be sure you get an up-to-date cache-file. An obsolete cache file is a good source of problems. - Kavli

primary

This is one of the domains for which this machine is authoritative. You put the entire domain name in. You need both forward and reverse lookups. The first value is the domain to append to every name included in that file. (There are some exceptions, but they will be explained later.) The name at the end of the line is the name of the file (relative to /etc or the directory if you specified one). The filename can have slashes in it to refer to subdirectories, so if you have a lot of domains you may want to split them up.

BE VERY CAREFUL TO PUT THE NUMBERS BACK TO FRONT FOR THE REVERSE LOOK UP FILE. The example given above is for the subnet ecel.uwa.edu.au whose IP address is 130.95.4.*. The reverse name must be 4.95.130.in-addr.arpa. It must be backwards and it must end with .in-addr.arpa. If your reverse name lookups don't work, check this. If they still don't work, check this again.

forwarders

This is a list of IP numbers to which requests are forwarded for sites about which we are unsure. A good choice here is the name server which is authoritative for the zone above you.

secondary (This line is not in the example, but is worth mentioning.)

A secondary line indicates that you wish to be a secondary name server for this domain. You do not need to do this usually. All it does is help make the DNS more robust. You should have at least one secondary server for your site, but you do not need to be a secondary server for anyone else. You can by all means, but you don't need to be. If you want to be a secondary server for another domain, then place the line

secondary gu.uwa.edu.au 130.95.100.3 130.95.128.1 sec/gu.uwa.edu.au

in your named.boot. This will make your named try the servers on both of the machines specified to see if it can obtain the information about those domains. You can specify a number of IP addresses for the machines to query; how many you can list probably depends on your machine. Your copy of named will, upon startup, go and query all the information it can get about the domain in question, remember it, and act as though it were authoritative for that domain.

Next you will want to start creating the data files that contain the name definitions.

2.2 The cache file:

You should always use the latest cache file. The simplest way to do this is by using dig(1) this way:

dig @ns.internic.net . ns > root.cache

You can also get a copy of the cache file by ftp'ing FTP.RS.INTERNIC.NET.

An example of a cache file is located in Appendix A.

2.3 The Forward Mapping file:

The file ecel.uwa.edu.au. will be used for the example with a couple of machines left in for the purpose of the exercise. Here is a copy of what the file looks like with explanations following.

; Authoritative data for ecel.uwa.edu.au
;
@ IN SOA decel.ecel.uwa.edu.au. postmaster.ecel.uwa.edu.au. (
93071200 ; Serial (yymmddxx)
10800 ; Refresh 3 hours
3600 ; Retry 1 hour
3600000 ; Expire 1000 hours
86400 ) ; Minimum 24 hours
IN A 130.95.4.2
IN MX 100 decel
IN MX 150 uniwa.uwa.edu.au.
IN MX 200 relay1.uu.net.
IN MX 200 relay2.uu.net.

localhost IN A 127.0.0.1

decel IN A 130.95.4.2
IN HINFO SUN4/110 UNIX
IN MX 100 decel
IN MX 150 uniwa.uwa.edu.au.
IN MX 200 relay1.uu.net.
IN MX 200 relay2.uu.net.

gopher IN CNAME decel.ecel.uwa.edu.au.

accfin IN A 130.95.4.3
IN HINFO SUN4/110 UNIX
IN MX 100 decel
IN MX 150 uniwa.uwa.edu.au.
IN MX 200 relay1.uu.net.
IN MX 200 relay2.uu.net.

chris-mac IN A 130.95.4.5
IN HINFO MAC-II MACOS

The comment character is ';' so the first two lines are just comments indicating the contents of the file.

All values from here on have IN in them. This indicates that the value is an Internet record. There are a couple of other types, but all you need concern yourself with is the Internet ones.

The IN type is default and can safely be omitted. It looks better without them I think.
- Kavli

The SOA record is the Start Of Authority record. It contains the information that other nameservers will learn about this domain and how to treat the information they are given about it. The '@' as the first character in the line indicates that you wish to define things about the domain for which this file is responsible. The domain name is found in the named.boot file in the line corresponding to this filename. All information listed refers to the most recent machine/domain name, so all records from the '@' until 'localhost' refer to the '@'. The SOA record has 5 magic numbers. The first magic number is the serial number. If you change the file, change the serial number. If you don't, no other name servers will update their information. The old information will sit around for a very long time.

Refresh is the time between refreshing information about the SOA. Retry is the frequency of retrying if an authoritative server cannot be contacted. Expire is how long a secondary name server will keep information about a zone without successfully updating it or confirming that the data is up to date. This is to help the information withstand fairly lengthy downtimes of machines or connections in the network without having to recollect all the information. Minimum is the default time-to-live value handed out by a nameserver for all records in a zone without an explicit TTL value. This is how long the data will live after being handed out. The two pieces of information before the 5 magic numbers are the machine that is considered the origin of all of this information (generally the machine that is running your named is a good one for here) and an email address for someone who can fix any problems that may occur with the DNS. Good ones here are postmaster, hostmaster or root. NOTE: You use dots and not '@' for the email address.

eg: root.decel.ecel.uwa.edu.au is correct
and
root@decel.ecel.uwa.edu.au is incorrect.

If your name contains a dot, e.g. Ronny.Kavli@mailhost.somedomain.there, you must escape the dot -> Ronny\.Kavli.mailhost.somedomain.there. - But, if possible, you should create a mail alias instead. That way, related mail can go to more than one person. - Kavli

We now have an address to map ecel.uwa.edu.au to. The address is 130.95.4.2, which happens to be decel, our main machine. If you try to find an IP number for the domain ecel.uwa.edu.au, it will get you the machine decel.ecel.uwa.edu.au's IP number. This is a nicety which means that people who have non-MX-record mailers can still mail fred@ecel.uwa.edu.au and don't have to find the name of a machine under the domain to mail to.

Now we have a couple of MX records for the domain itself. The MX records specify where to send mail destined for the machine/domain that the MX record is for. In this case we would prefer that all mail for fred@ecel.uwa.edu.au is sent to decel.ecel.uwa.edu.au. If that does not work, we would like it to go to uniwa.uwa.edu.au, because there are a number of machines that might have no idea how to get to us, but may be able to get to uniwa. And failing that, try the site relay1.uu.net. A small number indicates that this site should be tried first. The larger the number, the further down the list of sites to try it is. NOTE: Not all machines have mailers that pay attention to MX records. Some only pay attention to IP numbers, which is really stupid. All machines are required to have MX-capable Mail Transfer Agents (MTAs), as there are many addresses that can only be reached via this means.

Do not point an MX record to a CNAME record. A lot of mailers don't handle this. Add another A-record to it instead, but let the reverse table point to the real name. In other words: Don't add a PTR record to it. - Kavli

There is an entry for localhost now. Note that this is somewhat of a kludge and should probably be handled far more elegantly. By placing localhost here, a machine comes into existence called localhost.ecel.uwa.edu.au. If you finger it, or telnet to it, you get your own machine, because the name lookup returns 127.0.0.1, which is the special case for your own machine. I have used a couple of different DNS packages. The old BSD one let you put things into the cache which would always work, but would not be exported to other nameservers. In the newer Sun one, they are left in the cache and are mostly ignored once named is up and running. This isn't a bad solution, it's just not a good one.

Decel is the main machine in our domain. It has the IP number 130.95.4.2 and that is what this next line shows. It also has a HINFO entry. HINFO is Host Info, which is meant to be some sort of an indication of what the machine is and what it runs. The values are two whitespace-separated values, the first being the hardware and the second being the software. HINFO is not compulsory, it's just nice to have sometimes. We also have some MX records so that mail destined for decel has some other avenues before it bounces back to the sender if undeliverable.

It is a good idea to give all machines capable of handling mail an MX record because this can be cached on remote machines and will help to reduce the load on the network.

gopher.ecel.uwa.edu.au is the gopher server in our division. Now because we are cheapskates and don't want to go and splurge on a separate machine just for handling gopher requests, we have made it a CNAME to our main machine. While it may seem pointless, it does have one main advantage. When we discover that our placing terabytes of popular quicktime movies on our gopher server (no we haven't and we don't intend to) causes an unbearable load on our main machine, we can quickly move the CNAME to point at a new machine by changing the name mentioned in the CNAME. Then the slime of the world can continue to get their essential movies with a minimal interruption to the network. Other good CNAMEs to maintain are things like ftp, mailhost, netfind, archie, whois, and even dns (though the most obvious use for this fails). It also makes it easier for people to find these services in your domain.

Regarding CNAME from dns: NS records must point to A records. Same for MX records. - Kavli

We should probably start using WKS records for things like gopher and whois rather than making DNS names for them. The tools are not in wide enough circulation for this to work though. (Plus there are all those comments in many DNS implementations of "Not implemented" next to the WKS record.)

WKS == Well Known Services. - The different services a host is providing
- Kavli

Finally we have a Macintosh which belongs to my boss. All it needs is an IP number, and we have included the HINFO so that you can see that it is in fact a MAC-II running a Mac system. To get the list of preferred values, you should get a copy of RFC 1340. It lists lots of useful information such as /etc/services values, ethernet manufacturer hardware addresses, HINFO defaults and many others. I will include the list as it stands at the moment, but if any RFC supersedes 1340, then it will have a more complete list. See Appendix B for that list.

NOTE: If Chris had a very high profile and wanted his mac to appear like a fully connected unix machine as far as internet services were concerned, he could simply place an MX record such as

IN MX 100 decel

after his machine and any mail sent to chris@chris-mac.ecel.uwa.edu.au would be automatically rerouted to decel.

2.4 The Reverse Mapping File

The reverse name lookup is handled in a most bizarre fashion. Well it all makes sense, but it is not immediately obvious.

All of the reverse name lookups are done by finding the PTR record associated with the name w.x.y.z.in-addr.arpa. So to find the name associated with the IP number 1.2.3.4, we look for information stored in the DNS under the name 4.3.2.1.in-addr.arpa. They are organised this way so that when you are allocated a B class subnet for example, you get all of the IP numbers in the domain 130.95. Now to turn that into a reverse name lookup domain, you have to invert the numbers or your registered domains will be spread all over the place. It is a mess and you need not understand the finer points of it all. All you need to know is that you put the reverse name lookup files back to front.
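
Using the example machine from earlier in this document, the inversion works like this:

IP number: 130.95.4.2
reverse lookup name: 2.4.95.130.in-addr.arpa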

Here are the sample reverse name lookup files to go with our example.

0.0.127.in-addr.arpa
--
; Reverse mapping of domain names 0.0.127.in-addr.arpa
; Nobody pays attention to this, it is only so 127.0.0.1 -> localhost.
@ IN SOA decel.ecel.uwa.edu.au. postmaster.ecel.uwa.edu.au. (
91061801 ; Serial (yymmddxx)
10800 ; Refresh 3 hours
3600 ; Retry 1 hour
3600000 ; Expire 1000 hours
86400 ) ; Minimum 24 hours
;
1 IN PTR localhost.ecel.uwa.edu.au.
--

4.95.130.in-addr.arpa
--
; reverse mapping of domain names 4.95.130.in-addr.arpa
;
@ IN SOA decel.ecel.uwa.edu.au. postmaster.ecel.uwa.edu.au. (
92050300 ; Serial (yymmddxx format)
10800 ; Refresh 3 hours
3600 ; Retry 1 hour
3600000 ; Expire 1000 hours
86400 ) ; Minimum 24 hours
2 IN PTR decel.ecel.uwa.edu.au.
3 IN PTR accfin.ecel.uwa.edu.au.
5 IN PTR chris-mac.ecel.uwa.edu.au.
--

It is important to remember that you must have a second start of authority record for the reverse name lookups. Each reverse name lookup file must have its own SOA record. The reverse name lookup on the 127 domain is debatable seeing as there is likely to be only one number in the file and it is blatantly obvious what it is going to map to.

In general: Each primary file pointed to in named.boot should have one - and only one - SOA record.
- Kavli

The SOA details are the same as in the forward mapping.

Each of the numbers listed down the left-hand side indicates that the line contains information for that number of the subnet. The more significant digits of the subnet are implicit: e.g. the 130.95.4 of the IP number 130.95.4.2 is implicit for all numbers mentioned in the file.

The PTR must point to a machine that can be found in the DNS. If the name is not in the DNS, some versions of named just bomb out at this point.

Reverse name lookups are not compulsory, but nice to have. It means that when people log into machines, they get names indicating where they are logged in from. It makes it easier for you to spot things that are wrong and it is far less cryptic than having lots of numbers everywhere. Also if you do not have a name for your machine, some brain dead protocols such as talk will not allow you to connect.

Since I wrote this, I have had one suggestion of an alternative way to do the localhost entry. I think it is a matter of personal opinion, so I'll include it here in case anyone thinks that this is a more appropriate method.

The following is courtesy of jep@convex.nl (JEP de Bie)

The way I did it was:

1) add in /etc/named.boot:

primary . localhost
primary 127.in-addr.ARPA. IP127

(Craig: It has been suggested by Mark Andrews that this is a bad practice, particularly if you have upgraded to BIND 4.9. You also run the risk of polluting the root name servers. This comes down to a battle of ideology and practicality. Think twice before declaring yourself authoritative for the root domain.)

So I not only declare myself (falsely? - probably, but nobody is going to listen anyway most likely [CPR] :-) authoritative in the 127.in-addr.ARPA domain but also in the . (root) domain.

2) the file localhost has:

$ORIGIN .
localhost IN A 127.0.0.1

3) and the file IP127:

$ORIGIN 127.in-addr.ARPA.
1.0.0 IN PTR localhost.

4) and I have in my own domain file (convex.nl) the line:

$ORIGIN convex.nl.
localhost IN CNAME localhost.

The advantage (elegance?) is that a query (A) of localhost. gives the reverse of the query of 1.0.0.127.in-addr.ARPA. And it also shows that localhost.convex.nl is only a nickname for something more absolute. (While the notion of localhost is of course relative :-).)

And I also think there is a subtle difference between the lines

primary 127.in-addr.ARPA. IP127
and
primary 0.0.127.in-addr.ARPA. 0.0.127.domain
=============
JEP de Bie
jep@convex.nl
=============


3 Delegating authority for domains within your domain

When you have a very big domain that can be broken into logical and separate entities that can look after their own DNS information, you will probably want to do this. Maintain a central area for the things that everyone needs to see, and delegate the authority for the other parts of the organisation so that they can manage themselves.

Another essential piece of information is that every domain that exists must have NS records associated with it. These NS records denote the name servers that are queried for information about that zone. For your zone to be recognised by the outside world, the server responsible for the zone above you must have created an NS record for your machine in your domain. For example, putting the computer club onto the network and giving them control over their own part of the domain space, we have the following:

The machine authoritative for gu.uwa.edu.au is mackerel and the machine authoritative for ucc.gu.uwa.edu.au is marlin.

in mackerel's data for gu.uwa.edu.au we have the following

@ IN SOA ...
IN A 130.95.100.3
IN MX 100 mackerel.gu.uwa.edu.au.
IN MX 200 uniwa.uwa.edu.au.

marlin IN A 130.95.100.4

ucc IN NS marlin.gu.uwa.edu.au.
IN NS mackerel.gu.uwa.edu.au.

Marlin is also given an IP in our domain as a convenience. If they blow up their name serving, there is less that can go wrong, because people can still see that machine, which is a start. You could place "marlin.ucc" in the first column and leave the machine totally inside the ucc domain as well.

The second NS line is there because mackerel will be acting as secondary name server for the ucc.gu domain. Do not include this line if you are not authoritative for the information included in the sub-domain.

4 Troubleshooting your named:

4.1 Named doesn't work! What is wrong?

Step 1: Run nslookup and see what nameserver it tries to connect you to. If nslookup connects you to the wrong nameserver, create a /etc/resolv.conf file that points your machine at the correct nameserver. If there is no resolv.conf file, the resolver uses the nameserver on the local machine.
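
For example, a minimal /etc/resolv.conf pointing at the decel server from our running example would look like this (substitute your own domain and nameserver address):

domain ecel.uwa.edu.au
nameserver 130.95.4.2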

Step 2: Make sure that named is actually running.

Step 3: Restart named and see if you get any error messages on the console, and also check /usr/adm/messages.

Step 4: If named is running, nslookup connects to the appropriate nameserver and nslookup can answer simple questions, but other programs such as 'ping' do not work with names, then most likely you need to install resolv+.
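
As a rough sketch, steps 2 and 3 usually boil down to something like the following. The ps flags, the pid file location and the log file location all vary between flavours of unix, so treat these as examples rather than gospel:

ps aux | grep named              # is named actually running? (ps -ef on SysV)
kill -HUP `cat /etc/named.pid`   # make named re-read named.boot and its files
tail /usr/adm/messages           # look for error messages from named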

4.2 Local has noticed change, but nobody else has new info

I changed my named database and my local machine has noticed, but nobody else has the new information?

Change the serial number in the SOA for any domains that you modified and restart named. Wait an hour and check again. The information propagates out. It won't change immediately.
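
For example, using the serial from the forward mapping file shown earlier, change

93071200 ; Serial (yymmddxx)

to

93071201 ; Serial (yymmddxx)

and then restart named (or send it a HUP signal).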

4.3 I can see their info, but they can't see mine

My local machine knows about all the name server information, but no other sites know about me?

Find an upstream nameserver (one that has an SOA for the zone above your domain) and ask them to be a secondary name server for you. For example, if you are ecel.uwa.edu.au, ask someone who has an SOA for the domain uwa.edu.au. Get NS records (and glue) added to your parent zone for your zone. This is called delegating. It should be done formally like this or you will get inconsistent answers out of the DNS. ALL NAMESERVERS FOR YOUR ZONE SHOULD BE LISTED IN THIS MANNER.
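
As a sketch, the records the parent zone (uwa.edu.au in our example) needs for the ecel delegation look like this; the A record is the "glue" that lets resolvers find the delegated server in the first place:

ecel IN NS decel.ecel.uwa.edu.au.
decel.ecel IN A 130.95.4.2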

4.4 Forward domain works, but not backwards

My forward domain names work, but the backward names do not?

Make sure the numbers are back to front and have the in-addr.arpa on the end.

Make sure your reverse zone is registered. For Class C nets this can be done by mailing hostmaster@internic.net. For Class A and B nets, make sure that you are registered with the primary for your net and that the net itself is registered with hostmaster@internic.net.

5 How to get useful information from nslookup:

Nslookup is a very useful program, but I'm sure there are fewer than 20 people worldwide who know how to use it to its full usefulness. I'm most certainly not one of them. If you don't like using nslookup, there is at least one other program called dig that has most/all(?) of the functionality of nslookup and is a hell of a lot easier to use.

I won't go into dig much here except to say that it is a lot easier to get this information out of. I won't bother because nslookup ships with almost all machines that come with network software.

To run nslookup, you usually just type nslookup. It will tell you the server it connects to. You can specify a different server if you want. This is useful when you want to tell if your named information is consistent with other servers.
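
For instance, once nslookup is running, you can point it at a different server (uniwa, say, from our running example) to check that its answers agree with your own server's:

> server uniwa.uwa.edu.au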

5.1 Getting name to number mappings

Type the name of the machine. Typing 'decel' is enough if the machine is local.

(Once you have run nslookup successfully)

> decel
Server: ecel.uwa.edu.au
Address: 130.95.4.2

Name: decel.ecel.uwa.edu.au
Address: 130.95.4.2

>

One curious quirk of some name resolvers is that if you type a machine name, they will try a number of permutations. For example if my machine is in the domain ecel.uwa.edu.au and I try to find a machine called fred, the resolver will try the following.

fred.ecel.uwa.edu.au.
fred.uwa.edu.au.
fred.edu.au.
fred.au.
fred.

This can be useful, but more often than not, you would simply prefer a good way to make aliases for machines that are commonly referenced. If you are running resolv+, you should just be able to put common machines into the host file.

DIG: dig <machine name>

5.2 Getting number to name mappings

Nslookup defaults to finding you the Address of the name specified. For reverse lookups you already have the address and you want to find the name that goes with it. If you read and understood the bit above where it describes how to create the number to name mapping file, you would guess that you need to find the PTR record instead of the A record. So you do the following.

> set type=ptr
> 2.4.95.130.in-addr.arpa
Server: decel.ecel.uwa.edu.au
Address: 130.95.4.2

2.4.95.130.in-addr.arpa host name = decel.ecel.uwa.edu.au
>

nslookup tells you that the ptr for the machine name 2.4.95.130.in-addr.arpa points to the host decel.ecel.uwa.edu.au.

DIG: dig -x <machine number>

5.3 Finding where mail goes when a machine has no IP number

When a machine is not IP-connected, it needs to specify to the world where to send the mail so that it can dial up and collect it every now and then. This is accomplished by setting up an MX record for the site and not giving it an IP number. To get the information out of nslookup as to where the mail goes, do the following.

> set type=mx
> dialix.oz.au
Server: decel.ecel.uwa.oz.au
Address: 130.95.4.2

Non-authoritative answer:
dialix.oz.au preference = 100, mail exchanger = uniwa.uwa.OZ.AU
dialix.oz.au preference = 200, mail exchanger = munnari.OZ.AU
Authoritative answers can be found from:
uniwa.uwa.OZ.AU inet address = 130.95.128.1
munnari.OZ.AU inet address = 128.250.1.21
munnari.OZ.AU inet address = 192.43.207.1
mulga.cs.mu.OZ.AU inet address = 128.250.35.21
mulga.cs.mu.OZ.AU inet address = 192.43.207.2
dmssyd.syd.dms.CSIRO.AU inet address = 130.155.16.1
ns.UU.NET inet address = 137.39.1.3

You tell nslookup that you want to search for MX records and then you give it the name of the machine. It tells you the preference for the mail (smaller means more preferable), and who the mail should be sent to. It also includes sites that are authoritative (have this name in their named database files) for this MX record. There are multiple sites as a backup. As can be seen, our local public internet access company dialix would like all of their mail to be sent to uniwa, where they collect it from. If uniwa is not up, send it to munnari and munnari will get it to uniwa eventually.

NOTE: For historical reasons Australia used to be .oz, which was changed to .oz.au to move to the ISO standard extensions upon the advent of IP. We are now moving to a more normal hierarchy, which is where the .edu.au comes from. Pity, I liked having oz.

DIG: dig <zone> mx

5.4 Getting a list of machines in a domain from nslookup

Find a server that is authoritative for the domain or just generally all-knowing. To find a good server, find all the SOA records for a given domain. To do this, you set type=soa and enter the domain just like in the two previous examples.
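
In other words, something like this (the output, omitted here, will include the SOA record and the origin server):

> set type=soa
> gu.uwa.edu.au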

Once you have a server, type the following to get a list of all the machines in the domain.

> ls gu.uwa.edu.au.
[uniwa.uwa.edu.au]
Host or domain name Internet address
gu server = mackerel.gu.uwa.edu.au
gu server = uniwa.uwa.edu.au
gu 130.95.100.3
snuffle-upagus 130.95.100.131
mullet 130.95.100.2
mackerel 130.95.100.3
marlin 130.95.100.4
gugate 130.95.100.1
gugate 130.95.100.129
helpdesk 130.95.100.180
lan 130.95.100.0
big-bird 130.95.100.130

If you wanted to find a list of all of the MX records for the domain, you can put a -m flag in the ls command.

> ls -m gu.uwa.edu.au.
[uniwa.uwa.edu.au]
Host or domain name Metric Host
gu 100 mackerel.gu.uwa.edu.au
gu 200 uniwa.uwa.edu.au

This only works for a limited selection of the different types.

DIG: dig axfr <zone> @<server>

6 Appendices

6.2 Appendix B

An Excerpt from
RFC 1340 Assigned Numbers July 1992


MACHINE NAMES

These are the Official Machine Names as they appear in the Domain Name
System HINFO records and the NIC Host Table. Their use is described in
RFC-952 [53].

A machine name or CPU type may be up to 40 characters taken from the
set of uppercase letters, digits, and the two punctuation characters
hyphen and slash. It must start with a letter, and end with a letter
or digit.

ALTO DEC-1080
ALTOS-6800 DEC-1090
AMDAHL-V7 DEC-1090B
APOLLO DEC-1090T
ATARI-104ST DEC-2020T
ATT-3B1 DEC-2040
ATT-3B2 DEC-2040T
ATT-3B20 DEC-2050T
ATT-7300 DEC-2060
BBN-C/60 DEC-2060T
BURROUGHS-B/29 DEC-2065
BURROUGHS-B/4800 DEC-FALCON
BUTTERFLY DEC-KS10
C/30 DEC-VAX-11730
C/70 DORADO
CADLINC DPS8/70M
CADR ELXSI-6400
CDC-170 EVEREX-386
CDC-170/750 FOONLY-F2
CDC-173 FOONLY-F3
CELERITY-1200 FOONLY-F4
CLUB-386 GOULD
COMPAQ-386/20 GOULD-6050
COMTEN-3690 GOULD-6080
CP8040 GOULD-9050
CRAY-1 GOULD-9080
CRAY-X/MP H-316
CRAY-2 H-60/68
CTIWS-117 H-68
DANDELION H-68/80
DEC-10 H-89
DEC-1050 HONEYWELL-DPS-6
DEC-1077 HONEYWELL-DPS-8/70
HP3000 ONYX-Z8000
HP3000/64 PDP-11
IBM-158 PDP-11/3
IBM-360/67 PDP-11/23
IBM-370/3033 PDP-11/24
IBM-3081 PDP-11/34
IBM-3084QX PDP-11/40
IBM-3101 PDP-11/44
IBM-4331 PDP-11/45
IBM-4341 PDP-11/50
IBM-4361 PDP-11/70
IBM-4381 PDP-11/73
IBM-4956 PE-7/32
IBM-6152 PE-3205
IBM-PC PERQ
IBM-PC/AT PLEXUS-P/60
IBM-PC/RT PLI
IBM-PC/XT PLURIBUS
IBM-SERIES/1 PRIME-2350
IMAGEN PRIME-2450
IMAGEN-8/300 PRIME-2755
IMSAI PRIME-9655
INTEGRATED-SOLUTIONS PRIME-9755
INTEGRATED-SOLUTIONS-68K PRIME-9955II
INTEGRATED-SOLUTIONS-CREATOR PRIME-2250
INTEGRATED-SOLUTIONS-CREATOR-8 PRIME-2655
INTEL-386 PRIME-9955
INTEL-IPSC PRIME-9950
IS-1 PRIME-9650
IS-68010 PRIME-9750
LMI PRIME-2250
LSI-11 PRIME-750
LSI-11/2 PRIME-850
LSI-11/23 PRIME-550II
LSI-11/73 PYRAMID-90
M68000 PYRAMID-90MX
MAC-II PYRAMID-90X
MASSCOMP RIDGE
MC500 RIDGE-32
MC68000 RIDGE-32C
MICROPORT ROLM-1666
MICROVAX S1-MKIIA
MICROVAX-I SMI
MV/8000 SEQUENT-BALANCE-8000
NAS3-5 SIEMENS
NCR-COMTEN-3690 SILICON-GRAPHICS
NEXT/N1000-316 SILICON-GRAPHICS-IRIS
NOW SGI-IRIS-2400
SGI-IRIS-2500 SUN-3/50
SGI-IRIS-3010 SUN-3/60
SGI-IRIS-3020 SUN-3/75
SGI-IRIS-3030 SUN-3/80
SGI-IRIS-3110 SUN-3/110
SGI-IRIS-3115 SUN-3/140
SGI-IRIS-3120 SUN-3/150
SGI-IRIS-3130 SUN-3/160
SGI-IRIS-4D/20 SUN-3/180
SGI-IRIS-4D/20G SUN-3/200
SGI-IRIS-4D/25 SUN-3/260
SGI-IRIS-4D/25G SUN-3/280
SGI-IRIS-4D/25S SUN-3/470
SGI-IRIS-4D/50 SUN-3/480
SGI-IRIS-4D/50G SUN-4/60
SGI-IRIS-4D/50GT SUN-4/110
SGI-IRIS-4D/60 SUN-4/150
SGI-IRIS-4D/60G SUN-4/200
SGI-IRIS-4D/60T SUN-4/260
SGI-IRIS-4D/60GT SUN-4/280
SGI-IRIS-4D/70 SUN-4/330
SGI-IRIS-4D/70G SUN-4/370
SGI-IRIS-4D/70GT SUN-4/390
SGI-IRIS-4D/80GT SUN-50
SGI-IRIS-4D/80S SUN-100
SGI-IRIS-4D/120GTX SUN-120
SGI-IRIS-4D/120S SUN-130
SGI-IRIS-4D/210GTX SUN-150
SGI-IRIS-4D/210S SUN-170
SGI-IRIS-4D/220GTX SUN-386i/250
SGI-IRIS-4D/220S SUN-68000
SGI-IRIS-4D/240GTX SYMBOLICS-3600
SGI-IRIS-4D/240S SYMBOLICS-3670
SGI-IRIS-4D/280GTX SYMMETRIC-375
SGI-IRIS-4D/280S SYMULT
SGI-IRIS-CS/12 TANDEM-TXP
SGI-IRIS-4SERVER-8 TANDY-6000
SPERRY-DCP/10 TEK-6130
SUN TI-EXPLORER
SUN-2 TP-4000
SUN-2/50 TRS-80
SUN-2/100 UNIVAC-1100
SUN-2/120 UNIVAC-1100/60
SUN-2/130 UNIVAC-1100/62
SUN-2/140 UNIVAC-1100/63
SUN-2/150 UNIVAC-1100/64
SUN-2/160 UNIVAC-1100/70
SUN-2/170 UNIVAC-1160
UNKNOWN
VAX-11/725
VAX-11/730
VAX-11/750
VAX-11/780
VAX-11/785
VAX-11/790
VAX-11/8600
VAX-8600
WANG-PC002
WANG-VS100
WANG-VS400
WYSE-386
XEROX-1108
XEROX-8010
ZENITH-148

SYSTEM NAMES

These are the Official System Names as they appear in the Domain Name
System HINFO records and the NIC Host Table. Their use is described
in RFC-952 [53].

A system name may be up to 40 characters taken from the set of upper-
case letters, digits, and the three punctuation characters hyphen,
period, and slash. It must start with a letter, and end with a
letter or digit.

AEGIS LISP SUN OS 3.5
APOLLO LISPM SUN OS 4.0
AIX/370 LOCUS SWIFT
AIX-PS/2 MACOS TAC
BS-2000 MINOS TANDEM
CEDAR MOS TENEX
CGW MPE5 TOPS10
CHORUS MSDOS TOPS20
CHRYSALIS MULTICS TOS
CMOS MUSIC TP3010
CMS MUSIC/SP TRSDOS
COS MVS ULTRIX
CPIX MVS/SP UNIX
CTOS NEXUS UNIX-BSD
CTSS NMS UNIX-V1AT
DCN NONSTOP UNIX-V
DDNOS NOS-2 UNIX-V.1
DOMAIN NTOS UNIX-V.2
DOS OS/DDP UNIX-V.3
EDX OS/2 UNIX-PC
ELF OS4 UNKNOWN
EMBOS OS86 UT2D
EMMOS OSX V
EPOS PCDOS VM
FOONEX PERQ/OS VM/370
FUZZ PLI VM/CMS
GCOS PSDOS/MIT VM/SP
GPOS PRIMOS VMS
HDOS RMX/RDOS VMS/EUNICE
IMAGEN ROS VRTX
INTERCOM RSX11M WAITS
IMPRESS RTE-A WANG
INTERLISP SATOPS WIN32
IOS SCO-XENIX/386 X11R3
IRIX SCS XDE
ISI-68020 SIMP XENIX
ITS SUN


6.3 Appendix C: Installing DNS on a Sun when running NIS

====================
2) How to get DNS to be used when running NIS?

First setup the appropriate /etc/resolv.conf file.
Something like this should do the "trick".

;
; Data file for a client.
;
domain local domain
nameserver address of primary domain nameserver
nameserver address of secondary domain nameserver

where: "local domain" is the domain part of the hostnames.
For example, if your hostname is "thor.ece.uc.edu"
your "local domain" is "ece.uc.edu".

You will need to put a copy of this resolv.conf on
all NIS(YP) servers including slaves.

Under SunOS 4.1 and greater, change the "B=" at the top
of the /var/yp/Makefile to "B=-b" and setup NIS in the
usual fashion.

You will need to reboot or restart ypserv for these changes
to take effect.

Under 4.0.x, edit the Makefile or apply the following "diff":

*** Makefile.orig Wed Jan 10 13:22:11 1990
--- Makefile Wed Jan 10 13:22:01 1990
***************
*** 63 ****
! | $(MAKEDBM) - $(YPDBDIR)/$(DOM)/hosts.byname; \
--- 63 ----
! | $(MAKEDBM) -b - $(YPDBDIR)/$(DOM)/hosts.byname; \
***************
*** 66 ****
! | $(MAKEDBM) - $(YPDBDIR)/$(DOM)/hosts.byaddr; \
--- 66 ----
! | $(MAKEDBM) -b - $(YPDBDIR)/$(DOM)/hosts.byaddr; \
====================

--
Craig Richmond. Computer Officer - Dept of Economics (morning) 380 3860
University of Western Australia Dept of Education (afternoon) 2368
craig@ecel.uwa.edu.au Dvorak Keyboards RULE! "Messes are only acceptable
if users make them. Applications aren't allowed this freedom" I.M.VI 2-4

