The Linux desktop has come a long, long way, but there are still times when I have to use the command line. (I am a hardcore user, after all.) Even though I'm used to typing, spending hours upon hours with my fingers at the keyboard, I still grow tired of entering the same commands over and over. To reduce that tedium, I always add aliases to my .bashrc file.
What is an alias?
An alias is basically a shortcut for a command, placed in your ~/.bashrc file. Aliases cut down on typing and can save you from having to look up a command.
Aliases are set up near the bottom of the .bashrc file. You'll see a commented-out section that indicates where you should put them. The format of an alias is:
alias NICKNAME='full command here'
The keyword alias must be used. The nickname is what you will type at the command line. Make this nickname easy to remember. The = sign must also be used. After the = sign, you enter the full command, including flags and switches, enclosed in single quotes. Once you are done, save the .bashrc file and open up a new terminal. I always find it best to leave the original terminal window open in case there are problems. In the new terminal, type the alias nickname and the command will run.
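Here's the whole cycle in one place (the alias name ll is just an illustration). As an alternative to opening a new terminal, you can also reload .bashrc in the current shell with the source builtin:

```shell
# Append a sample alias to ~/.bashrc (the name "ll" is just an example)
echo "alias ll='ls -l'" >> ~/.bashrc

# Reload .bashrc in the current shell instead of opening a new terminal
source ~/.bashrc
```

Keeping the original terminal open while you test is still a good habit, in case a typo in .bashrc causes problems.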
The following list of aliases should help make your command-line experience a bit easier.
1. The ssh alias
This one should be a no-brainer for those of you who frequently secure shell into particular boxes. For this I add an alias like so:
alias server_name='ssh -v -l USERNAME IP ADDRESS'
Just change server_name to a memorable name for the server. Then, change USERNAME and IP ADDRESS to suit your needs.
2. The ls aliases
Some distributions don't include some of the handier ls aliases by default. Generally, I like to see full listings instead of just filenames. For that I always include this alias:
alias ll='ls -l'
Another handy ls alias is this:
alias la='ls -a'
3. The rm safety net
I can't tell you how many times I have "rm'd" a file I shouldn't have "rm'd". To avoid this, I add this alias:
alias rm='rm -i'
Adding the '-i' flag forces rm into interactive mode, which asks whether you're sure you want to remove a file.
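One caveat worth knowing: with this alias in place, you can still skip the prompt for a one-off, non-interactive removal, either by prefixing the command with a backslash or by using the command builtin. A quick sketch:

```shell
alias rm='rm -i'

# Create a throwaway file, then remove it without triggering the -i prompt
touch /tmp/scratch_file
\rm /tmp/scratch_file                          # backslash bypasses the alias
command rm -f /tmp/scratch_file 2>/dev/null    # "command" does the same
```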
4. A more useful df command
This handy tool tells you how much space you have left on a drive. The only catch is that, run by itself, df reports sizes in 1K blocks. Most people would rather see the output in megabytes or gigabytes. To make that happen, add this alias:
alias df='df -h'
Now, every time you run the df command, the information will be returned in a human-readable format.
5. The nonstandard Firefox
Many times, I install Firefox in strange directories (or have more than one version of Firefox installed for testing purposes). For this, I will add an alias to start the correct Firefox. Say, for example, I have the beta of the newest, upcoming Firefox release installed, as well as the current stable Firefox. They are both installed in my home directory in different subdirectories. I will then add two aliases like so:
alias ff1='/home/jlwallen/firefox/firefox'
alias ff2='/home/jlwallen/firefoxb3/firefox'
Now I can start the stable Firefox with ff1 or the beta with ff2.
6. The bookmark alias
Speaking of Firefox, let's create an alias to open it to a specific URL:
alias ffg='/home/jlwallen/firefox/firefox http://www.google.com'
This alias will open Firefox directly to the Google site.
7. The constant editing of a file
There are certain files that I am constantly editing. For instance, when I used Enlightenment E16 (I now use E17), I was frequently editing the menu file ~/.e16/menus/user_apps. Instead of constantly opening a terminal and entering nano ~/.e16/menus/user_apps, I used an alias that allowed me to type emenu and start editing. I used this alias:
alias emenu='aterm -e nano ~/.e16/menus/user_apps'
Now, I just enter the command emenu (or type it in the run-command dialog) to open this file in an editor.
8. The apt-get update
There are numerous ways to use an alias to help you with apt-get. One of my favorites is to add this alias:
alias update='sudo apt-get update'
I only need to enter update, and I will be prompted for the sudo password. You can modify this to suit your frequent apt-get needs.
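For instance, you could take this a step further with a companion alias (the name upgrade is just a suggestion) that refreshes the package lists and then installs any available upgrades in one go:

```shell
alias update='sudo apt-get update'

# Hypothetical companion alias: refresh package lists, then upgrade
alias upgrade='sudo apt-get update && sudo apt-get upgrade'
```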
9. The rpm batch install
I like to do a lot of batch installing with rpm. I will typically dump a bunch of rpm files into an empty directory (created for this specific purpose) and run the command rpm -ivh ~/RPM/*rpm. Of course, an alias makes this even easier:
alias brpm='rpm -ivh ~/RPM/*rpm'
You have to create the ~/RPM directory and run the command as root for this to work.
10. The long, arduous path
There are some paths that I often change to that seem to take eons to type. When I was working on the AfterStep window manager, I had to constantly change to the ~/GNUstep/Library/AfterStep/start directory to edit menus. After a while, you get tired of typing cd ~/GNUstep/Library/AfterStep/start just to get there. So I added an alias like so:
alias astart='cd ~/GNUstep/Library/AfterStep/start'
Naturally, you can change that to fit your needs. This will save you a lot of typing.
So there you have it: a few simple bash aliases that will ease the load on your fingers. You can modify them to suit you, and they'll give you a good start on creating your own handy bash aliases.
Monday, June 2, 2008
Cut down on Linux command-line typing with these 10 handy bash aliases
Posted by Paritosh at 2:48 PM
Friday, May 30, 2008
10 ways to secure your Linux desktop
A Linux desktop is far more secure than most others. But this level of security doesn't necessarily involve typical security-focused software or techniques. Sometimes, the easiest means to security are those measures that are the easiest to forget. Let's take a look at 10 things you can do to secure a Linux desktop.
Note that we're talking about the desktop, not a server. Linux server security is another beast altogether -- one that would confuse the average desktop user.
1. Locking the screen and logging out is important
Most people forget that the Linux desktop is a multi-user environment. Because of this, you can log out of your desktop and others can log in. Not only does that mean that others could be using your desktop, it also means you can (and should) log out when you're finished working. Of course, logging out is not your only option. If you are the only user on your system, you can lock your screen instead. Locking your screen simply means that a password will be required to get back into the desktop. The difference here is that you can leave applications running and lock the desktop. When you unlock the desktop, those same programs will still be running. Safe and secure.
2. Hiding files and folders is a quick fix
In Linux-land, files and folders are hidden by adding a "." before the name. So the file test will appear in a file browser, whereas .test will not. Most people don't know that running the command ls -a will show hidden files and folders. So if you have folders or files you don't want your co-workers to see, simply add the dot to the beginning of the file or folder name. You can do this from the command line like so: mv test .test
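The round trip looks like this, sketched in a throwaway directory:

```shell
# Demonstrate hiding and unhiding a file in a scratch directory
mkdir -p /tmp/hidden_demo && cd /tmp/hidden_demo
touch test

mv test .test   # hide the file: plain ls now shows nothing
ls
ls -a           # the -a flag reveals .test (along with . and ..)

mv .test test   # bring the file back into view
```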
3. A good password is a must
Your password on a Linux PC is your golden key. If you give that password out, or if you use a weak password, your golden key could become everyone's golden key. And if you're using a distribution like Ubuntu, that password will give users much more access than, say, on Fedora. To that end, make sure your password is strong. There are many password generators you can use, such as Automated Password Generator (apg).
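If a dedicated generator isn't installed, a reasonable stand-in is to pull random characters from the kernel's entropy pool. This one-liner is a common pattern:

```shell
# Generate a 16-character alphanumeric password from /dev/urandom
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo
```

Adjusting the character set passed to tr (for example, adding punctuation) strengthens the result further, at the cost of memorability.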
4. Installing file-sharing applications is a slippery slope
I know many Linux users are prone to file sharing. If you want to run that risk at home, that's your call. But when at work, you not only open yourself (or your company) up to lawsuits, you open your desktop machine up to other users who might have access to sensitive data on your work PC. So as a rule, do not install file-sharing tools.
5. Updating your machine regularly is a smart thing
Linux isn't Windows. With Windows, you get security updates when Microsoft releases them (which could be many months away). With Linux, a security update can come minutes or hours after the security flaw is detected. With both KDE and GNOME, there are update applets for the Panel. I always recommend having them up and running so you know when updates are made available. Don't put off security updates. There is a reason they come out.
6. Installing virus protection is actually useful in Linux
Believe it or not, virus protection in Linux has its place. Of course, the chances of a virus causing problems on YOUR Linux machine are slim to none. But those e-mails you forward to others' Windows machines could cause problems. With a good virus scanner, like ClamAV, you can ensure that e-mail going out of your machine doesn't contain anything nasty that could come back to haunt you (or your company).
7. SELinux is there for a reason
SELinux (Security-Enhanced Linux) was created by the NSA. What SELinux does is help lock down access control to applications. And it does it very well. Sure, SELinux can sometimes be a pain. In some cases, it can take a bite out of your system's performance, or you might find some applications a struggle to install. But the security comfort you gain using SELinux (or AppArmor) far outweighs the negatives. During the Fedora installation, you get the chance to enable SELinux.
8. Creating /home in a separate partition is safer
The default Linux installation places your /home directory right in the root filesystem. This is fine until you have to reinstall the operating system, or the root partition fails, and your data goes down with it.
To avoid that, you can place /home on a different hard drive or partition altogether (making it a partition in and of itself). This is not a task for the faint of heart, but it is one worth employing if you're uber-concerned about your data.
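Once the partition exists, mounting it at /home is a one-line entry in /etc/fstab. A sketch, where the device name /dev/sda3 and the ext3 filesystem are assumptions you'd replace with your own:

```
# /etc/fstab entry mounting a dedicated partition at /home
# (/dev/sda3 and ext3 are placeholders; substitute your partition and filesystem)
/dev/sda3   /home   ext3   defaults   0   2
```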
9. Using a nonstandard desktop is worth its weight in gold
Not only do the alternative desktops (Enlightenment, Blackbox, Fluxbox, etc.) give you a whole new look and feel for your PC, they offer simple security from prying eyes you may never have thought of. I have deployed Fluxbox on kiosk machines when I wanted a machine that could do one thing: browse the network. How do you do that? Simple. Create a single mouse menu (or desktop icon) for the application you want to use. Unless the user knows how to get back to the command line (by logging out or hitting Ctrl-Alt-F*, where * is a virtual console other than the one you are using), they will not be able to start up any application other than the one offered. Since most users have no idea how to move around in these desktops anyway, they aren't going to have the slightest idea how to get to your files. Simple pseudo-security.
10. Stopping services is best
This is a desktop machine. It's not a server. So why are you running services like httpd, ftpd, and sshd? You shouldn't need them, and they only pose a security risk (unless you know how to lock them down). So don't run them. Check your /etc/inetd.conf file and make sure that all unnecessary services are commented out.
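Commenting out a service is a one-line edit. Here is a sketch against a sample copy of the file (the entries shown are hypothetical; the real edit on /etc/inetd.conf requires root, and your service names will differ):

```shell
# A sample inetd.conf fragment (hypothetical entries) to demonstrate the edit
cat > /tmp/inetd.conf <<'EOF'
ftp    stream tcp nowait root /usr/sbin/in.ftpd    in.ftpd
telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd
EOF

# Prefix the telnet line with '#' so inetd no longer offers the service
sed -i 's/^telnet/#telnet/' /tmp/inetd.conf

cat /tmp/inetd.conf
```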
You might find these suggestions to be pure common sense -- but maybe you'll see a means of security you never thought of before. And if you're a new Linux user, these tips are a great place to start to ensure that your Linux experience is a good one.
Posted by Paritosh at 4:52 PM
Wednesday, March 12, 2008
How Hackers Breach Security
Hacking, cracking, and cyber crimes are hot topics these days and will continue to be for the foreseeable future. However, there are steps you can take to reduce your organization's threat level. The first step is to understand what risks, threats, and vulnerabilities currently exist in your environment. The second step is to learn as much as possible about the problems so you can formulate a solid response. The third step is to intelligently deploy your selected countermeasures and safeguards to erect protections around your most mission-critical assets. This white paper discusses ten common methods hackers use to breach your existing security.
Stealing Passwords
Security experts have been discussing the problems with password security for years. But it seems that few have listened and taken action to resolve those problems. If your IT environment controls authentication using passwords only, it is at greater risk for intrusion and hacking attacks than those that use some form of multifactor authentication.
The problem lies with the ever-increasing ability of computers to process larger amounts of data in a smaller amount of time. A password is just a string of characters, typically only keyboard characters, which a person must remember and type into a computer terminal when required. Unfortunately, passwords that are too complex for a person to remember easily can be discovered by a cracking tool in a frighteningly short period of time. Dictionary attacks, brute force attacks, and hybrid attacks are all methods used to guess or crack passwords. The only real protection against such threats is to use very long passwords or multiple factors for authentication. Unfortunately, requiring ever longer passwords actually reduces security, thanks to the human factor: people simply are not equipped to remember numerous long strings of chaotic characters.
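The arithmetic behind this is stark. With roughly 94 printable keyboard characters, each additional character multiplies the brute-force search space by 94, which bash can show directly:

```shell
# With ~94 printable ASCII characters, each extra character
# multiplies the brute-force search space by 94
echo $((94**6))   # 689869781056 possible 6-character passwords
echo $((94**8))   # 6095689385410816 possible 8-character passwords
```

Beyond about ten characters the counts overflow 64-bit shell arithmetic, which gives a sense of why length (or a second factor) matters more than cleverness.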
But even with reasonably long passwords that people can remember, such as 12 to 16 characters, there are still other problems facing password-only authentication systems.
Password theft, password cracking, and even password guessing are still serious threats to IT environments. The best protection against these threats is to deploy multifactor authentication systems and to train personnel regarding safe password habits.
Trojan Horses
A Trojan horse is a continuing threat to all forms of IT communication. Basically, a Trojan horse is a malicious payload surreptitiously delivered inside a benign host. You are sure to have heard of some of the famous Trojan horse payloads, such as Back Orifice, NetBus, and SubSeven. But the real threat of Trojan horses is not the malicious payloads you know about; it's the ones you don't. A Trojan horse can be built or crafted by anyone with basic computer skills. Any malicious payload can be combined with any benign software to create a Trojan horse, and there are countless crafting and authoring tools designed to do just that. Thus, the real threat of a Trojan horse attack is the unknown.
The malicious payload of a Trojan horse can be anything. This includes programs that destroy hard drives, corrupt files, record keystrokes, monitor network traffic, track Web usage, duplicate e-mails, allow remote control and remote access, transmit data files to others, launch attacks against other targets, plant proxy servers, host file-sharing services, and more. Payloads can be grabbed off the Internet or written from scratch by the hacker. This payload can then be embedded into any benign software to create the Trojan horse. Common hosts include games, screensavers, greeting card systems, admin utilities, archive formats, and even documents.
All a Trojan horse attack needs to be successful is a single user to execute the host program. Once that is accomplished, the malicious payload is automatically launched as well, usually without any symptoms of unwanted activity. A Trojan horse could be delivered via e-mail as an attachment, presented on a Web site as a download, or placed on removable media (memory card, CD/DVD, USB stick, floppy, etc.). In any case, your protections are automated malicious-code detection tools, such as modern anti-virus products and other specific forms of malware scanners, and user education.
Exploiting Defaults
Nothing makes attacking a target network easier than when that target is using the defaults set by the vendor or manufacturer. Many attack tools and exploit scripts assume that the target is configured using the default settings. Thus, one of the most effective and often overlooked security precautions is simply to change the defaults.
To see the scope of this problem, all you need to do is search the Internet for sites using the keywords "default passwords". There are numerous sites that catalog all of the default user names, passwords, access codes, settings, and naming conventions of every software and hardware IT product ever sold. It is your responsibility to know about the defaults of the products you deploy and make every effort to change those defaults to nonobvious alternatives.
But it is not just account and password defaults you need to be concerned with, there are also the installation defaults such as path names, folder names, components, services, configurations, and settings. Each and every possible customizable option should be considered for customization. Try to avoid installing operating systems into the default drives and folders set by the vendor. Don't install applications and other software into their "standard" locations. Don't accept the folder names offered by the installation scripts or wizards. The more you can customize your installations, configurations, and settings, the more your system will be incompatible with attack tools and exploitation scripts.
Man-in-the-Middle Attacks
Every single person reading this white paper has been a target of numerous man-in-the-middle attacks. A MITM attack occurs when an attacker is able to fool a user into establishing a communication link with a server or service through a rogue entity. The rogue entity is the system controlled by the hacker. It has been set up to intercept the communication between user and server without letting the user become aware that the misdirection attack has taken place. A MITM attack works by somehow fooling the user, their computer, or some part of the user's network into re-directing legitimate traffic to the illegitimate rogue system.
A MITM attack can be as simple as a phishing e-mail attack, where a legitimate-looking e-mail is sent to a user with a URL link pointed toward the rogue system instead of the real site. The rogue system has a look-alike interface that tricks the user into providing their logon credentials. The credentials are then duplicated and sent on to the real server. This action opens a link with the real server, allowing the user to interact with their resources without the knowledge that their communications have taken a detour through a malicious system that is eavesdropping on, and possibly altering, the traffic.
MITM attacks can also be waged using more complicated methods, including MAC (Media Access Control) duplication, ARP (Address Resolution Protocol) poisoning, router table poisoning, fake routing tables, DNS (Domain Name Server) query poisoning, DNS hijacking, rogue DNS servers, HOSTS file alteration, local DNS cache poisoning, and proxy re-routing. And that's without mentioning the URL obfuscation, encoding, and manipulation often used to hide the link misdirection.
To protect yourself against MITM attacks, you need to avoid clicking on links found in e-mails. Furthermore, always verify that links from Web sites stay within trusted domains and still maintain SSL encryption. Also, deploy IDSes (Intrusion Detection Systems) to monitor network traffic as well as DNS and local system alterations.
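For the local-system alterations mentioned above, even a simple integrity baseline helps. As one minimal sketch, you can checksum the HOSTS file and re-verify it later; any tampering, such as a poisoned entry redirecting a bank's hostname, changes the hash and fails the check:

```shell
# Record a baseline checksum of the hosts file
md5sum /etc/hosts > /tmp/hosts.baseline

# Later, verify the file hasn't been altered; any change fails this check
md5sum -c /tmp/hosts.baseline
```

Dedicated integrity tools go much further, but the principle is the same: know what "unmodified" looks like.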
Wireless Attacks
Wireless networks have the appeal of freedom from wires - the ability to be mobile within your office while maintaining network connectivity. Wireless networks are inexpensive to deploy and easy to install. Unfortunately, the true cost of wireless networking is not apparent until security is considered. It is often the case that the time, effort, and expense required to secure wireless networks is significantly more than deploying a traditional wired network.
Interference, DoS, hijacking, man-in-the-middle, eavesdropping, sniffing, and many more attacks are made simple for attackers when wireless networks are present. That doesn't even mention the issue that a secured wireless network (802.11a or 802.11g) will typically support under 14 Mbps of throughput, and then only under the most ideal transmission distances and conditions. Compare that with the standard of a minimum of 100 Mbps for a wired network, and the economy just doesn't make sense.
However, even if your organization does not officially sanction and deploy a wireless network, you may still have wireless network vulnerabilities. Many organizations have discovered that workers have taken it upon themselves to secretly deploy their own wireless network. They can do this by bringing in their own wireless access point (WAP), plugging their desktop's network cable into the WAP, then re-connecting their desktop to one of the router/switch ports of the WAP. This retains their desktop's connection to the network, plus it adds wireless connectivity. All too often, an unapproved WAP is deployed with little or no security enabled. Thus, a $50 WAP can easily open up a giant security hole in a multi-million-dollar secured wired network.
To combat unapproved wireless access points, a regular site survey needs to be performed. This can be done with a notebook using a wireless detector such as NetStumbler or with a dedicated hand-held device.
Doing their Homework
I don't mean that hackers break into your network by getting their school work done, but you might be surprised how much they learn from study about how to compromise security. Hackers, especially external hackers, learn how to overcome your security barriers by researching your organization. This process can be called reconnaissance, discovery, or footprinting. Ultimately, it is intensive, focused research into all information available about your organization from public and not-so-public resources.
If you've done any research or reading into warfare tactics, you are aware that the most important weapon you can have at your disposal is information. Hackers know this and spend considerable time and effort acquiring a complete arsenal. What is often disconcerting is how much your organization freely contributes to the hacker's weapon stockpile. Most organizations are hemorrhaging data; companies freely give away too much information that can be used against them in various types of logical and physical attacks, and a hacker can harvest much of it, often in minutes.
As you can see, there is no end to the information that a hacker can obtain from public, open sources. Each kernel of truth discovered often leads the hacker to unearth more. Often, a hacker will spend over 90% of their time in information-gathering activities. The more the attacker learns about the target, the easier the subsequent attack becomes.
As for defense, you are ultimately at a loss, mainly because it is already too late. Once information is out on the Internet, it is always out there. You can obviously clean up and sterilize any information resource currently under your direct control. You can even contact third-party information repositories to request that they change your information. Some online data systems, such as domain registrars, offer privacy and security services (for a fee, of course). You can also control or limit the output of information in the future by being more discreet in your announcements, product details, press releases, and so on.
However, it is the information that you can't change or remove from the Internet that will continue to erode your security. The only way to manage uncontrollable information is to alter your environment so that the information is no longer correct or relevant. Think of this as another way to deviate from defaults, or at least from what was previously known about you.
Monitoring Vulnerability Research
Hackers have access to the same vulnerability research that you do. They are able to read Web sites, discussion lists, blogs, and other public information services about known problems, issues, and vulnerabilities with hardware and software. The more the hacker can discover about possible attack points, the more likely it is that he can discover a weakness you've yet to patch, protect, or even become aware of.
To combat vulnerability research on the part of the hacker, you have to be just as vigilant as the hacker. You have to look for problems in order to protect against them just as intently as the hacker looks for problems to exploit. This means keeping watch on the discussion groups and Web sites of each and every vendor whose products your organization utilizes. Plus, you need to watch third-party security oversight discussion groups and Web sites to learn about issues that vendors are failing to make public or that don't yet have easy solutions. These include places like securityfocus.com, US-CERT, hackerstorm.com, and hackerwatch.org.
Being Patient and Persistent
Hacking into a company network is not typically an activity someone undertakes and completes in a short period of time. Hackers often research their targets for weeks or months before starting their first tentative logical interactions with scanners, banner-grabbing tools, and crawling utilities. And even then, their initial activities are mostly subtle probing to verify the data they gathered through their intensive "offline" research. Once hackers have crafted a profile of your organization, they must then select a specific attack point, design the attack, test and drill the attack, improve the attack, schedule the attack, and, finally, launch the attack.
In most cases, a hacker's goal is not to bang on your network so that you become aware of their attacks. Instead, a hacker's goal is to gain entry subtly so that you are unaware that a breach has actually taken place. The most devastating attacks are those that go undetected for extended periods of time, while the hacker has extensive control over the environment. An invasion can remain undetected nearly indefinitely if it is executed by a hacker who is patient and persistent. Hacking is often most successful when performed one small step at a time and with significant periods of time between each step attempt - at least up to the point of a successful breach. Once hackers have gained entry, they quickly deposit tools to hide their presence and grant them greater degrees of control over your environment. Once these hacker tools are planted, hidden, and made active, the hackers are free to come and go as they please.
Likewise, protecting against a hacker intrusion is also about patience and persistence. You must be able to watch even the most minor activities on your network with standard auditing processes as well as an automated IDS/IPS system. Never allow any anomaly to go uninvestigated. Use common sense, follow the best business practices recommended by security professionals, and keep current on patches, updates, and system improvements.
However, realize that security is not a goal that can be fully obtained. There is no perfectly secure environment. Every security mechanism can be fooled, overcome, disabled, bypassed, exploited, or made worthless. Hacking successfully often means the hacker is more persistent than the security professional protecting an environment. Ultimately, it is an arms race to see who blinks or falls behind first. With enough time, the right tools, sufficient expertise and skill, mounting information collection, and persistence, a hacker can and will find a way to breach any and every security system.
Confidence Games
The good news about hacking today is that many security mechanisms are very effective against most hacking attempts. Firewalls, IDSes, IPSes, and anti-malware scanners have made intrusions and hacking a difficult task. The bad news, however, is that many hackers have expanded their idea of what hacking means to include social engineering: hackers are going after the weakest link in any organization's security, the people.
People are always the biggest problem with security because they are the only element within the secured environment that has the ability to choose to violate the rules. People can be coerced, tricked, duped, or forced into violating some aspect of the security system in order to grant a hacker access. The age-old problem of people exploiting other people by taking advantage of human nature has returned as a means to bypass modern security technology.
Protection against social engineering is primarily education. Training personnel to recognize and report all abnormal or awkward interactions can be an effective countermeasure. But this is only true if everyone in the organization realizes that they are a social engineering target. In fact, the more a person believes that their position in the company is so minor that they would not be a worthwhile target, the more they become the hacker's preferred target.
Already Being on the Inside
All too often when hacking is discussed, it is assumed that the hacker is some unknown outsider. However, studies have shown that a majority of security violations actually are caused by internal employees. So, one of the most effective ways for a hacker to breach security is to be an employee. This can be read in two different ways. First, the hacker can get a job at the target company and then exploit that access once they gain the trust of the organization. Second, an existing employee can become disgruntled and choose to cause harm to the company as a form of revenge or retribution.
In either case, when someone on the inside decides to attack the company network, many of the security defenses erected against outside hacking and intrusion are often ineffective. Instead, internal defenses specific to managing internal threats need to be deployed. This could include keystroke monitoring, tighter enforcement of the principle of least privilege, preventing users from installing software, not allowing any external removable media source, disabling all USB ports, extensive auditing, host-based IDS/IPS, and Internet filtering and monitoring.
There are many possible ways that a hacker can gain access to a seemingly secured environment. It is the responsibility of everyone within an organization to support security efforts and to watch for abnormal events. We need to secure IT environments to the best of our abilities and budgets while watching for the inevitable breach attempt. In this continuing arms race, vigilance is required, persistence is necessary, and knowledge is invaluable.
Posted by Paritosh at 11:43 AM
Friday, March 7, 2008
Windows Vista: Is it secure enough for business?
Microsoft’s latest desktop operating system, Windows Vista, contains a wide range of new features, from the user interface to the heart of the operating system. However, it is the new security-related technologies which were given top priority by Microsoft in response to the many criticisms of the vulnerabilities in Vista’s forerunner, Windows XP. Developments include improved monitoring and reporting on security status, minimized opportunity for attack and improved defense against spyware. There is also a new mechanism to prevent rogue code from being able to make malicious changes to the operating system kernel, and improved browser and firewall functionality.
Windows Security Center
Windows Security Center (WSC) runs in the background, monitoring and reporting on the security status of a computer. First introduced by Microsoft in Windows XP Service Pack 2, the enhanced version in Vista provides greater integration both with other Vista security features and with third-party security solutions.
As with Windows XP, WSC monitors the internet firewall and checks the status of automatic updates and anti-virus software but it has been extended in Vista to include monitoring of anti-spyware applications. Monitoring of the security settings in Internet Explorer 7 and of the new User Account Control function (see below) has also been added.
Part of the reasoning behind the enhancements to WSC is to raise end-user awareness of security issues by alerting users to any problems. While this clearly benefits home users, businesses and other organizations such as education and government institutions will find these alerts both insufficient and annoying, and so might well choose to disable them.
In addition, some security vendors have reacted negatively to the fact that WSC cannot be automatically disabled when their alternative security solutions are installed, although Sophos cannot see why any vendor should object to a built-in security center reporting on the status of its software.
User Account Control
User Account Control (UAC) is one of the most important security features in Windows Vista. Its objective is to minimize the opportunity for attack, preventing the installation of today’s malware threats, in a scenario where end users are given local administrator rights. As with Windows XP, end users are given administrator rights by default. However, instead of invoking administrator status in a blanket fashion across all applications, the Vista login generates two security tokens: StandardUser and Administrator.
By default, Vista assigns the StandardUser token to applications, so applications that do not require administrator rights will run with no user intervention. However, many applications require administrator privileges and in this case the Administrator token is invoked and the user is asked to cancel or allow the program as appropriate, as shown in the figure.
From a security point of view UAC is a significant step forward and the principle of least required privilege is theoretically a good one as, by default, registry and file system access are restricted. This means that malware is prevented from automatically copying itself to locations such as the Windows system folder and cannot write itself to registry keys in order to be launched automatically by the operating system. The principle of the StandardUser token also prevents malicious applications from writing to the memory space of other processes, a technique commonly used by malware to bypass personal or client firewalls.
Unfortunately, UAC is secure but also intrusive, generating a high level of alerts, many of which are not intuitive for non-technical users. The danger is that users will automatically select “Allow” when prompted, without fully considering whether they should. The other danger is that UAC can be disabled entirely – and indeed many beta testers chose to do this – which removes the improved security.
Windows Defender
Windows Defender is a free anti-spyware program built into Windows Vista that will detect and remove some adware, spyware and other unwanted programs. The software uses automatic updates provided by Microsoft analysts to help detect and remove new threats as they are identified. It does not, however, offer comprehensive anti-malware protection, in spite of the fact that the information in WSC implies that it does.
Windows Defender only supports Windows XP Service Pack 2 or later, or Windows Server 2003 Service Pack 1 or later. It does not support other operating systems including Windows 95/98/Me and 2000. And because it is targeted at the consumer market it does not offer any central administration capabilities. So it offers little to multi-platform, centrally managed enterprise networks.
Kernel protection
Two new mechanisms have been introduced to protect the operating system kernel – Kernel Patch Protection (KPP), or PatchGuard, and mandatory signing of drivers.
KPP has been implemented in 64-bit Vista to prevent a particular type of malicious activity that manipulates the operating system kernel, causing serious security breaches and adversely impacting the stability, reliability and performance of the operating system and user applications. Commonly known as “rootkits,” this type of malware is often used to hide other potentially unwanted software, such as bots and spyware. KPP prevents kernel mode drivers from extending or replacing operating system services and should therefore stop rogue drivers from making malicious changes to the kernel.
KPP has not been added to 32-bit Vista since many programs (including security software) use the kernel space in an undocumented way and Microsoft was concerned about compatibility with the existing application set. This means that 32-bit systems remain vulnerable to rootkit attack. However, the second kernel protection mechanism – mandatory signing of drivers – has been implemented in both 32-bit and 64-bit Vista and can be set to prevent unsigned drivers from loading.
Some security vendors have complained that they are being “locked out” of the Vista operating system kernel by KPP. This is because they need to be able to make changes inside Microsoft’s kernel in order to ensure their existing products can support 64-bit versions.
While it is true that there will now be some dependency on Microsoft to deliver kernel interfaces which could slow all security vendors down, this is more than compensated for by the additional security offered by a locked down kernel. Windows Vista with KPP is a step in the right direction for customers – although, since this is a software mechanism it is quite likely that it will be circumvented by malware writers sooner or later – and security vendors should embrace and work with it rather than fight it.
Internet Explorer 7
Windows Vista’s built-in web browser, Internet Explorer 7 (IE7), includes security enhancements designed to protect users from phishing and spoofing attacks. In protected mode it helps prevent data and configuration settings from being deleted or changed by malicious websites or malware. The feature is enforced by a new mechanism, called Mandatory Integrity Control, whereby every process is assigned an integrity level, and each level limits access to system objects (registry, file system, other processes, etc.).
The new IE7 protected mode actually runs IE with the integrity level “Low” – which is lower than the default for most user processes. This happens for all security zones except the trusted zone. Downloaded programs inherit the low integrity level, which should prevent malicious programs and potentially unwanted applications (PUAs) from infecting the system and integrating with the browser.
IE7 also has a phishing filter, which helps users browse more safely by advising them when websites might be attempting to steal their confidential information. The filter works by analyzing website content, looking for known characteristics of phishing techniques and using a global network of data sources to decide if the website should be trusted.
Windows Firewall
Windows Vista includes a new firewall that goes beyond the Windows XP Service Pack 2 firewall. Application-aware outbound filtering has been added as have location-based profiles, which allow users to set up different rules based on the network location.
However, the default policy is still to allow all outgoing traffic and the default settings will not provide any additional protection over the firewall in XP SP2.
In addition, although some management is available through Group Policy, the central management function does not provide enterprise administrators with the visibility, monitoring, policy configuration and rapid response capability that enterprise-level security management consoles deliver.
Other security features
Windows Vista also includes improved Wi-Fi security, readiness for multi-factor authentication, BitLocker data protection, a Network Access Protection client, and improved auditing for compliance.
In Windows Vista, wireless networking is more secure by default, and includes support for the latest and most secure wireless networking protocol, Wi-Fi Protected Access 2 (WPA2).
Windows Vista comes with an API that makes it easier to add smart cards and other authentication systems, such as biometrics, to Windows authentication, making it harder for hackers to gain access to computers and data through password cracking or social engineering techniques.
Enhanced encryption enables organizations to protect against theft or loss of corporate intellectual property. Windows Vista has improved support for data protection at the document, file, directory, and machine level, including the ability to define which employees have access to certain data. Encryption keys can now be stored on smart cards. The BitLocker disk encryption system provides some protection against hacking attacks that involve booting from removable disks.
The Network Access Protection (NAP) client can be used to prevent rogue or unprotected computers gaining full access to a network, although it will only really be implementable once the necessary server components are released with the next release of Windows Server, codenamed Longhorn, expected to be released soon.
Posted by Paritosh at 12:46 PM 16 comments
Wednesday, March 5, 2008
How Does Ping Really Work?
Introduction
Ping is a basic Internet program that most of us use daily, but did you ever stop to wonder how it really worked? I don’t know about you, but it bugs me when I do not know how something really works. The purpose of this paper is to resolve any lingering questions you may have about ping and to take your understanding to the next level. If you do not happen to be a programmer, please do not be frightened off! I am not going to tell you how to write your own version of ping; trust me.
I am guessing that you know basically how the TCP/IP ping utility works. It sends an ICMP (Internet Control Message Protocol) Echo Request to a specified interface on the network and, in response, it expects to receive an ICMP Echo Reply. By doing this, the program can test connectivity, gauge response time, and report a variety of errors.
ICMP is a software component of the Internetworking layer of TCP/IP; essentially, it is a companion at that level to IP (Internet Protocol) itself. In fact, ICMP relies on IP for transport across the network. If you observe this sort of network traffic, say on an Ethernet network, then your protocol analyzer would capture an Ethernet frame transporting an IP datagram with an ICMP message inside.
Enter the problem: Since the ping program executes at the Application layer, how does it make ICMP do these tricks? You may recall, if you are a student of TCP/IP, that the Host-to-Host layer is sandwiched between these entities. Is that bypassed? If so, then how? Who is responsible for formatting these messages (Echo Request and Echo Reply)?
More vexingly, when unexpected ICMP responses, other than the customary Echo Reply, result from the Echo Request, how is it that they find their way to the ping program? This last question may seem obvious, but it is not. ICMP messages contain no addressing information that allows the TCP/IP protocol stack to discern the program that is to receive the message. TCP and UDP use port numbers for this purpose. So, how does this work?
Background
The TCP/IP protocol stack is organized as a four-layer model (see Figure). The lowest layer, commonly called the Network Interface or Network Access layer, is analogous to OSI layers 1 and 2, the Physical and Data Link Control layers. This includes things like media, connectors, signaling, physical addressing, error detection, and managing shared access to the media. For most of us this translates into Ethernet and our cabling system.
The layer above the Network Access layer, the Internetworking layer, is best likened to OSI layer 3, the Network layer. Here we expect to find logical addressing and routing: things that facilitate communication across network boundaries. This is where IP and its addressing mechanisms reside, as does ICMP.
ICMP is a necessary component of any TCP/IP implementation. It does not exist to provide information to the higher-layer protocols (like TCP and UDP) so that they may be more reliable. Rather, ICMP provides network diagnostic capabilities and feedback to those responsible for network administration and operation. See RFC 792, if you are really interested.
Above the Internetworking layer is the Host-to-Host layer, which is the counterpart of OSI layer 4, the Transport layer. I like to think that this also includes some of the Session layer (5) functionality as well. This is where we expect to find facilities for reliable end-to-end data exchange, additional error checking, and the means to discriminate one program from another (using port numbers). TCP and UDP reside at this level.
At the top of the stack, the Application or Process layer, we find high-level protocols (like SMTP, HTTP, and FTP) implemented. This is where applications execute as well. So when you do a ping, the ping program should be perceived to function at this level.
A Minor Mystery
With ICMP operating at the Internetworking layer and the ping program at the Application layer, how is the Host-to-Host layer bypassed? The answer lies in an understanding of what are known as “raw” sockets.
Well, for openers, what is a socket, right? Abstractly, a socket is an endpoint for communication, usually thought of as consisting of an IP address and port number, which identify a particular host and program, respectively. But a programmer has a slightly different perspective on a socket. From his vantage point, “socket” is a system function that allocates resources that enable the program to interact with the TCP/IP protocol stack beneath. The addressing information is associated with this only after the socket call is made. (Again, if you are interested, this is the role of the “bind” function.) So, take note, it is possible to allocate a socket and not overtly associate any addressing information with it.
There are three commonly encountered types of sockets: stream, datagram, and raw. TCP uses the stream type and UDP uses the datagram type. Raw sockets are used by any application that needs to interact directly with IP, bypassing TCP and UDP in doing so. Customers include routing protocol implementations like routed and gated (that implement RIP and OSPF). It also includes our friend ping.
There are some special considerations in using raw sockets. Since you are circumventing the facilities of the Host-to-Host layer, you forego the program addressing mechanism, the port numbering scheme. This means that programs that employ raw sockets must sift through all incoming packets presented to them in order to find those packets that are of interest.
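To make this concrete, here is how the three socket types map onto the sockets API, sketched in Python. (Creating the raw socket requires root privileges, so that line is shown commented out; treat this as a sketch, not a demo.)

```python
import socket

# Stream sockets ride on TCP, datagram sockets on UDP; both get the
# Host-to-Host layer's port-number demultiplexing for free.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# A raw ICMP socket bypasses TCP and UDP entirely. The third argument
# selects the protocol (ICMP is IP protocol number 1), and the kernel
# delivers a copy of every matching inbound ICMP message to it.
# Requires root, so it is commented out here:
# icmp_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
#                           socket.IPPROTO_ICMP)

tcp_sock.close()
udp_sock.close()
```

Note that the raw socket is created with a protocol number rather than a port number, which is exactly why the sifting described above becomes the program's job.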
What Actually Goes On
When the ping program begins execution, it opens a raw socket sensitive only to ICMP. This means two things: first, ping itself is responsible for formatting the outbound Echo Request messages; second, ping will be handed a copy of every inbound ICMP message, whether meant for it or not.
Let us take these things in turn.
On the outbound side, the Echo Requests are formatted in the manner shown in the figure. The message type is always the coded value eight (8). The code field always contains zero. The checksum is used for error detection. The ICMP message header and data are included in its computation. The ping program performs this calculation and fills in the blank. The identification field follows and is supposed to contain the process ID (PID) that uniquely identifies that execution of the ping program to the operating system. On Windows systems, this field contains the constant value 256. Next is the sequence number field, which starts at 0 and is bumped by one on each Echo Request sent. After these required fields, optional test data will follow. In the ping implementation that I examined (Slackware Linux), this included a timestamp used in the round-trip time calculation upon receipt of the Echo Reply.
As for inbound ICMP messages, ping’s task is a bit more complex. Because ping is using a raw ICMP socket, the program is presented with a copy of all incoming ICMP messages, except for a few special cases like incoming Echo Requests generated by other people pinging us (the latter are handled by the system). This means that ping sees not only the expected Echo Replies when they arrive but also things like Destination Unreachable, Source Quench, and Time Exceeded messages. (The figure summarizes the ICMP message types.)
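The outbound formatting is easy to sketch. The following Python fragment (an illustrative reconstruction, not the actual ping source) builds an Echo Request exactly as described: type 8, code 0, a checksum computed over the header and data, the identification field, and a sequence number.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                           # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"") -> bytes:
    """ICMP Echo Request: type 8, code 0, checksum, identifier, sequence."""
    # Compute the checksum with the checksum field zeroed, then fill it in.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

A Unix-style ping would pass its PID as ident; the Windows flavor, as noted above, uses the constant 256. A handy property of the one's-complement checksum is that recomputing it over the finished message, checksum field included, yields zero, which is how the receiver verifies it.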
Now think about this for a moment. If you have two copies of the ping program running at the same time, then they are each going to see one another’s Echo Replies and any other “nastygrams” that might show up. Each instance of the program must identify the messages that are relevant to it. If you guessed that this is what the PID (identification) field is used for then you are absolutely right.
How does the Windows flavor of ping accomplish this feat without the PID? You got me. That sounds like a topic for a future article. Let me get back to you on that.
Interestingly, the messages coming in are handed to ping with the IP header still intact. So, the program has access to important things there like the time-to-live (TTL) value and record route information (if the latter option is turned on).
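Putting the inbound side together: the program must skip the variable-length IP header, check the ICMP message type, and compare the identification field against its own. A sketch of that demultiplexing step (again an illustration under the assumptions above, not the real ping source):

```python
import struct

def match_echo_reply(packet: bytes, my_ident: int):
    """Given a packet as delivered on a raw ICMP socket (IP header still
    attached), return (sequence, ttl) if it is an Echo Reply addressed
    to this instance of the program, else None."""
    ihl = (packet[0] & 0x0F) * 4   # IP header length, in bytes
    ttl = packet[8]                # TTL sits at offset 8 of the IP header
    icmp = packet[ihl:]
    msg_type, code, _csum, ident, seq = struct.unpack("!BBHHH", icmp[:8])
    if msg_type == 0 and ident == my_ident:   # type 0 = Echo Reply
        return seq, ttl
    return None
```

Any message that fails the identification check is simply discarded by that instance of the program, which is exactly what keeps two concurrently running pings from confusing each other's replies.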
Summary
At this point, you should have a fairly complete understanding of the cycle of processing associated with ping. Let me recapitulate the essential elements:
- ping runs at the Application layer but opens a raw ICMP socket, bypassing the Host-to-Host layer (TCP and UDP) entirely.
- ping formats its own Echo Request messages, including the checksum, identification, and sequence number fields.
- Because raw sockets have no port numbers, ping receives a copy of every inbound ICMP message and uses the identification field (the PID on Unix-like systems) to pick out the ones meant for it.
- Inbound messages arrive with the IP header intact, giving ping access to values such as the TTL.
Posted by Paritosh at 1:25 PM 148 comments
Labels: Guides
Thursday, January 10, 2008
10 security blunders
While one of the following links is actually from early 2008, they all refer to issues that arose during the year of 2007.
- The UK privacy breach: An employee of Her Majesty’s Revenue and Customs Office mailed two CDs containing confidential data on about 25 million UK citizens, including names, addresses, insurance account numbers, and bank account details for claimants in the national child benefit database. These CDs never made it to their destination. Just in case you think someone having your bank account number is no big deal, you should read about what happened to Top Gear TV series host Jeremy Clarkson when he published his account information in a newspaper to “prove” that having someone’s bank account number will do nothing for a malicious party. At least Clarkson owned up to the mistake and started advocating disincentives for such poor security practice. I particularly like when he said “we must go after the idiots who lost the discs and stick cocktail sticks in their eyes until they beg for mercy.”
- Embassies confuse anonymity with security: Swedish security consultant Dan Egerstad showed that people all over the world, most notably certain embassies, tend to assume that using the Tor anonymizing network means they’re secure. Somehow, they’ve missed the importance of encryption to protect their data. One must wonder why governments are so bad at security. By the way, the Swedish equivalents to the FBI and CIA raided Egerstad’s apartment for undisclosed reasons, accused him of several crimes, then released him without charges.
- The iPhone runs everything as root: As Wired put it, IPhone’s Security Rivals Windows 95. This is very bad — and, of course, the root password for the iPhone was cracked in just three days. It had to happen eventually. To be fair, Windows Mobile devices all run everything as the administrative user as well, but this is not exactly unexpected (so it’s less notable). Credit to the fine folks at Metasploit for figuring it out, and figuring out how to make use of that fact.
- Sears installs spyware on customer computers: The depth and breadth of harvested data is truly frightening, and you just have to read it to believe it. Do not join the “My SHC Community”. Worse yet, if you follow the update link at the beginning of the article, you’ll find out that Sears (KMart is involved, too) is playing some pretty sketchy games with privacy policy presentation, based on whether the spyware is installed on your system. Considering this example, that’s probably reason enough to avoid ever getting mixed up in any online Sears community, but that’s not all. . . .
- Your Sears buying habits may be public knowledge: In short, by joining the Sears “Manage My Home” community, you can search through the Sears purchase history of anyone whose name and address you know. Not only should you avoid joining online Sears communities but, it seems, you should avoid shopping there as well. Apparently, major corporations are as bad as government agencies when it comes to security — especially Sears.
Old News
What follows is a list of older news items, from before 2007, that are still interesting and worth knowing about.
- Switching from Unix to MS Windows proves disastrous for air traffic control: A Microsoft Windows 2000 system used to replace Unix air traffic control servers required regular restarts, and when a restart was overlooked in 2004, it endangered 800 commercial aircraft.
- MS Windows crash cripples UK government agency: Only a couple months after the air traffic control debacle, almost the entire UK Department of Work and Pensions network crashed. This event was called the biggest crash in public sector history.
- The Pentagon improperly redacted text in a declassified document: Text was masked in a PDF by painting black lines over it, as if a physical, hardcopy, paper document had a black marker run over the relevant sections of text. Of course, doing that with Adobe Acrobat tends to leave all the text intact and recoverable, as such black “painting” occurs on a separate document layer. A Greek medical student at Bologna University recovered the obscured text with a couple of mouse clicks in 2005.
- The VA privacy breach: More than 26 million US military veterans’ personal data — including names, birthdates, and social security numbers — were taken home by a Veterans Administration employee. As necessitated by Murphy’s Law, the data was stolen (of course). It was stored on an unencrypted drive in the employee’s laptop but, surprisingly, it seems the thieves did not know what they had and the data was not used for identity theft purposes.
- Sony may have the worst consumer security record of any corporation: The six-part Boing Boing series on Sony’s “anti-consumer technology” problems makes a compelling case for getting your technology from anyone but Sony. If you thought the 2005 Sony rootkit was the only problem, you haven’t been paying attention — the rootkit installed even if you told it not to, there was a second Sony rootkit, the rootkit remover itself caused security issues, and the RIAA said it’s no big deal because other record labels also install rootkits. Somehow, I do not find that very reassuring.
Posted by Paritosh at 2:14 PM 17 comments
Thursday, December 13, 2007
Configuring a Samba Server
If you deploy a Linux-based machine to serve up files in a Windows network, you’re not going to get very far without the help of Samba. Samba is an open source software suite that offers seamless file and print services to SMB/CIFS clients.
Basically, Samba can fool a Windows machine into thinking a Linux machine is a Windows machine. A bit of trickery yes, but it gets the job done.
Before YaST, the real trick was getting Samba to actually work. Configuring Samba required hand-editing the smb.conf file; this could be a nightmare. Now you can point-and-click your way to getting Samba running, because the good people at Novell and SuSE have worked hard to bring Linux administrators YaST (Yet another Setup Tool). This tool makes setting up a plethora of system settings as simple as it gets. Here’s how it works.
What does Samba do?
Before we move on, let’s make sure we all know what Samba does. Samba’s magic happens thanks to a protocol suite known as the Common Internet File System (CIFS), which runs over TCP port 445 (or ports 137–139 when carried over NetBIOS). At the heart of this protocol suite is the Server Message Block (SMB) protocol.
Samba is simply the open source implementation of the CIFS protocol suite. Samba allows Linux servers and workstations to talk to any Windows workstation, all the way back to Windows 95.
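Behind the GUI, everything YaST does here ultimately lands in the smb.conf file. As a rough idea of what a hand-written equivalent looks like (the workgroup, share name, and path below are made up for illustration):

```ini
[global]
    workgroup = MYDOMAIN
    netbios name = FILESERVER
    security = user

[shared]
    path = /srv/samba/shared
    read only = no
    browseable = yes
```

Keeping this mapping in mind makes the YaST screens that follow much easier to interpret, since each GUI option corresponds to one of these directives.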
Configuring Samba
To configure a Samba Server in SuSE Linux, you’ll use the YaST tool. To do so, go to the Control Center. Select Administrator Settings from the Common Tasks section to open the YaST Admin Tool. Next, select Network Services to reveal a listing of the various Network Services that can be configured from within YaST. Press the Samba Server button and you’ll see YaST’s Samba GUI.
The first thing you have to do is enter the domain to be configured. The drop-down is a bit misleading. The default, TUX-NET, is the only option available. Simply erase that option and enter your domain. Once you have applied this, press Next to take care of the final phase of initial setup.
This final phase requires you to decide if your Samba server will act as a Primary Domain Controller. Make your selection and press Next.
Once you press Next, you can’t come back to this portion of the setup without aborting the installation altogether. So make your choices wisely.
After you press Next, you are in the primary Samba configuration.
The first configuration is the Samba startup status. You can either configure Samba to start at boot or to be manually started. I highly recommend you have Samba start at boot. It will slow your boot time down a fraction of a second, but it will lessen the tasks you must handle once the server is up and running.
Once you have Samba’s boot configuration taken care of, open up the firewall for Samba. Select the Open Port In Firewall check box. If your machine has more than one network interface, press the Firewall details button to apply the firewall changes to the correct interface.
The next step is to configure the proper Samba shares. Press the Shares tab to reveal this configuration.
The Shares tab allows you to configure every aspect of the Samba shares. You can go beyond just enabling or disabling each share, of course. By highlighting a share and pressing the Edit button, you can further customize each share configuration.
Let’s take a look at configuring the users share. Highlight that share and press Edit. A new window reveals five pre-configured options.
Obviously, the default settings will not work for most, and there are a lot of possible options to add. Let’s take a look at the default options and what they are:
Obviously, there are quite a few more options to be added.
If you press the Add button, a small window will appear with a drop-down list. That drop down list contains 124 other options to add and configure. Once you find the option you want to add, select it and press OK. Some of the new options will have another configuration window to edit before the option is added. Say, for instance, you want to add admin users. Click the drop-down and highlight admin users.
Press OK and the second window will open to enter the admin username.
When you press OK, you’ll be taken back to the initial shares screen, but the admin user will be listed among the options. After you have completed the configuration of this section, press OK to move on.
Another option in the Shares tab is to enable users to share their home directories. This is important: If you enable this feature, every user’s home directory will be made available. If this server is used frequently by users, then privacy can become an issue. If you decide to use this feature, make sure your users are made aware of it.
Finally, the Identity tab, shown in Figure E, allows you to further specify the identity and role of the Samba server.
Two of the three configuration options should be familiar from earlier configurations. The third, NetBIOS Name, is simply the name under which the machine will be seen on the shared network. If you want the server to be seen as “Department X”, then enter Department X in this option.
You may also undertake some advanced settings from this tab. From the Advanced Settings drop-down, you can select either Expert Global Settings or User Authentication Settings. The Expert Global Settings allow you to fine-tune settings for printing, security, and log-in. When you press the Edit button, you’ll see that the majority of the options in the Global Settings configuration are text-field entries.
If you’re familiar with hand-editing smb.conf files, you’ll recognize a number of the configurations. One of the most important configurations you’ll make here is the security option. This is how your users will authenticate to your Samba server. There are five possible settings: share, user, server, domain, and ads.
The other Advanced Settings tab, User Authentication Sources, is simply a way for you to define where Samba finds the resource file to authenticate users. There are four different types:
Obviously, this configuration will depend completely on your network setup. The default option is smbpasswd File. If you press the Edit button (with that option highlighted), you can then enter the location of the password file used.
Make the connection
With all of these options complete, you are ready to complete the configuration by pressing the Finish button. This will save all of your configurations and start the Samba services. If your configuration is successful, you can now log into your Samba server from your Windows machines. Just connect to the Linux server from the Windows workstation in Explorer using the standard \\servername syntax.
Posted by Paritosh at 2:15 PM 2 comments
Tuesday, December 11, 2007
Configuring Linux using a GUI
Many hardcore Linux users would shudder at the thought of configuring Linux network services using a GUI. A solid argument could be made that a GUI has no place being on a server in the first place. Servers are just supposed to sit quietly in the corner and do their job by themselves without user interaction. GUIs, by definition, are designed to make user interaction easier. A GUI adds needless overhead to a machine that’s not supposed to be interacting with users from its own console. Therefore, you should keep a GUI off of the server and configure services to run from a command line.
Although it’s practically sacrilegious, using a GUI for configuring servers can make sense in some cases. Primarily, using a GUI can help network administrators who aren’t familiar with Linux learn to set up network services faster. Many network administrators come from a Windows background, where practically everything is point-and-click. Although they need to learn new tools, the old Windows skills can more easily be translated to Linux through GUI tools.
Even for seasoned Linux users, trying to figure out the locations, layouts, and choices of configuration files that need to be maintained can be a chore. Some services can use three or four different .conf files. A slight error in the file can cause the service to fail. If the error was overlooked, a lot of time can be lost to troubleshooting. GUI tools that automatically find and populate the corresponding .conf files can end confusion and decrease the chance of errors.
GUI configuration options
Linux gives you several options when it comes to GUI-based network administration. Since the distribution we’ve chosen to use in this series revolves around SuSE 10.2, the major GUI configuration tool you’ll use is YaST. Other distributions have their own tools, but YaST is very well-organized, with an easy-to-follow arrangement.
YaST does a lot, but it doesn’t do it all. For those services YaST can’t control, we’re going to use Webmin, an add-on tool which allows you to control Linux services from inside of a Web browser. This means you have to learn how to use another tool, but it’s still easier than doing configurations from the command line.
A quick look around YaST
Although it is contrary to what many Linux admins would advise, I’m going to log into my SuSE 10.2 machine as root for this setup. I don’t do this often, but it saves me from having to enter the root password each time I perform an administration task.
Once you are done setting up these services, log out.
The first thing you’ll want to do is select the Computer menu. From the menu, select Control Center. From the Common Tasks section, select Administrator Settings to open the YaST Admin Tool. You’ll see a screen similar to the figure below. Select Network Services to reveal a listing of the various Network Services that can be configured from within YaST.
Working with Webmin
There are a number of ways to go about the installation of Webmin, but the easiest and most consistent method is to install from source. To get the source tarball, go to the SourceForge site for the latest release. Once you have downloaded that file, untar the archive with the command tar xvzf webmin-1.310.tar.gz.
Now cd into the newly created webmin-1.310 directory. Inside this directory is the setup script that installs Webmin. From within this directory, run the command ./setup.sh /var/www/html/webmin (where /var/www/html/webmin is the directory you wish to install Webmin into).
Note: The /var/www/html/webmin directory does not have to exist, because the Webmin setup script will create it for you.
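Taken together, the install steps above condense to a short shell sketch. The VERSION and TARGET values are assumptions (1.310 was simply the release current at the time of writing), and the guard keeps the commands from running if the tarball isn’t actually present:

```shell
# Webmin source-install sketch. VERSION and TARGET are assumptions;
# adjust them to the release you downloaded and your preferred path.
VERSION=1.310
TARGET=/var/www/html/webmin

# Only proceed if the downloaded tarball is actually here.
if [ -f "webmin-$VERSION.tar.gz" ]; then
    tar xvzf "webmin-$VERSION.tar.gz"   # unpack the source archive
    cd "webmin-$VERSION"                # enter the unpacked directory
    ./setup.sh "$TARGET"                # setup.sh creates $TARGET if needed
fi
```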
While the installation script is running, it is going to ask you the following:
- Webmin configuration directory
- The location at which Webmin will store logs
- Path to Perl
- Your server OS (Webmin tries to detect this)
- The port Webmin will run on (defaults to 10000)
- The username and password to log in to Webmin
- Your server’s hostname (Webmin tries to detect this)
- SSL usage; should only prompt if Perl’s SSL libraries are installed (this author has not run Webmin under SSL)
- Whether you want Webmin to start with system boot (highly recommended)
I tried using the root username and password for my system; it worked. I attribute this to Webmin having previously been installed (but not run) via RPM. After the installation script completed, it informed me:
Webmin has been installed and started successfully. Use your web browser to go to
http://localhost.localdomain:10000/
and login with the name and password you entered previously.
Because Webmin uses SSL for encryption only, the certificate it uses is not signed by one of the recognized CAs such as Verisign. When you first connect to the Webmin server, your browser will ask you if you want to accept the certificate presented, as it does not recognize the CA. Say yes.
The directory from the previous version of Webmin (/usr/libexec/webmin) can now be safely deleted to free up disk space, assuming that all third-party modules have been copied to the new version.
The last section of the presented information was a good hint as to why I was not given the chance to set up an admin username and password.
Now that Webmin is installed, it’s time to take a peek around and see what it has to offer.
Logging in
As stated above, you may have to log in with your root username and password.
Once logged in, you will be greeted with the Webmin main page.
From there, the first place to visit is the Webmin Configuration screen.
Security configurations
From within the Webmin Configuration screen, there are a number of items you will want to set up. Obviously, security for such a tool is high on the list. Select the IP Access Control link to set up a list of allowed or denied hosts; this helps thwart password guessing. You may have set up a rigid password that’s a mixture of alphabetic and numeric characters (as well as upper- and lowercase), but eventually someone’s going to crack it.
To add one more layer of security, set up this list so you allow only specific IP addresses to access the tool. Make sure you include every known safe IP address that will need access to the Webmin interface. All other hosts are denied.
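Behind the scenes, the IP Access Control page writes its list into Webmin’s miniserv.conf. A minimal sketch of the result might look like the excerpt below; the path and addresses are examples only, and the exact layout can vary by Webmin version:

```
# /etc/webmin/miniserv.conf (excerpt)
# Only these addresses may reach the Webmin port; all others are denied.
allow=127.0.0.1 192.168.1.0/255.255.255.0
```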
Along this same line of security, select the Trusted Referrers link. From here, you can configure Webmin’s referrer-checking support, which ensures that malicious links from other sites cannot trick your browser into doing dangerous things with Webmin. In this section, there is a text area where you can enter trusted sites, a radio selection, and a check box. The radio selection allows you to choose to Enable Referrer Checking, and the check box allows you to select to Trust Links From Unknown Referrers.
From everything I’ve read and experienced, the default configuration for Webmin is pretty secure. For those working with mission-critical servers, however, it might behoove you to uncheck the Trust Links From Unknown Referrers box and configure some trusted Web sites.
The next step in securing Webmin is enabling the system to use SSL tunnels; this allows remote logins without passing unencrypted passwords across the ether. However, there are steps that must be taken before this feature can be used. First, OpenSSL must be installed; on many newer distributions, this is already taken care of. If not, download the most recent OpenSSL from rpmfind and run the command (as root) rpm -ivh openssl-XXX.rpm (where XXX is the release number).
With OpenSSL installed, you must install the Net::SSLeay Perl module. Download this module from the Net::SSLeay site, untar the archive with the command tar xvzf Net_SSLeay.pm-XXX.tar.gz (where XXX is the release number), change into the newly created directory, run the command perl Makefile.PL, run make, and then run make install.
To test the installation, run the command perl -e 'use Net::SSLeay'. If no errors are reported, you are good to go.
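The Net::SSLeay steps above can be condensed into the following shell sketch. The XXX placeholder stands in for the release number exactly as in the text, and the guard keeps the commands from running if the tarball isn’t present:

```shell
# Build-and-install sketch for the Net::SSLeay Perl module.
# TARBALL is an assumption -- substitute the real release number for XXX.
TARBALL="Net_SSLeay.pm-XXX.tar.gz"
SRCDIR="${TARBALL%.tar.gz}"          # directory the tarball unpacks into

if [ -f "$TARBALL" ]; then
    tar xvzf "$TARBALL"              # unpack the module source
    cd "$SRCDIR"                     # enter the unpacked directory
    perl Makefile.PL                 # generate the Makefile
    make                             # build the module
    make install                     # install it (run as root)
    perl -e 'use Net::SSLeay'        # smoke test: silence means success
fi
```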
Select the SSL Encryption link from within the Webmin Configuration page, and you should see the following text, indicating SSL is working properly:
The host on which Webmin is running appears to have the SSLeay Perl module installed.
The first thing you want to verify is whether Enable SSL If Available? is checked. If it is, then you should now be able to log in to your Webmin site with the URL https://localhost.localdomain:10000/.
Your Webmin login is now encrypted.
Webmin users
Creating Webmin users is a very important task and should not be taken lightly. It’s necessary to grant users access to various aspects of your Webmin server (especially if your company’s server farm cannot be administered by one person alone).
However, as in any good UNIX environment, users should be created and maintained wisely. To make this an easier task, I suggest creating groups to suit your needs. Say, for example, you have an IT team that needs access to the Webmin interface. From the Webmin main menu, select Webmin Users. Inside this page, Webmin Groups can be administered. Select Create New Webmin Group to create a new group.
From the list of options, select which modules the IT group needs to have access to, and press Save. Now, go to the Create Webmin User section, and create a new user. During this configuration, select the IT group from the Member Of Group list. There are some nice configuration options here, such as allowing users access to the site only on given days and times. Once you Save, the user will be created, and the user will inherit all of the options from the IT group.
Install and configure Windows Server 2008 core
With the launch of Microsoft Windows Server 2008 imminent on February 27, 2008, I want to show you a feature of the new operating system I am fond of. With Windows Server 2008, you have the option of performing a Server Core installation, which provides you with the minimum set of tools needed to run Windows.
You are provided with a kernel and a command line to manage the server. It is slim and bare bones and allows you to configure Windows concisely. This type of installation is perfect for a datacenter. I am really excited about this feature.
Installation
When you first run through the installation of Windows Server 2008, you have two options for installation. They are:
- Windows Server 2008 Enterprise (Full Installation)
- Windows Server 2008 Enterprise (Server Core Installation)
After the installation, the main window for your new installation appears and you are ready to log in, as shown in the figure.

The initial login is Administrator with a blank password. You are required to set an Administrator password on initial login.

Now you are logged in.

You are ready to configure the date, time, and time zone. At the command line, type the following: control timedate.cpl and set the options accordingly.

If you need to configure and change the keyboard layout and settings, type the following in the command window: control intl.cpl

Let’s move on and change the server name. The default name is a bunch of random letters and numbers, and I would like to change it to a local standard. You can view the current hostname by typing the following:
c:\windows\system32>hostname
Now let’s use the name ssw-svr15. We will perform this change at the command line by typing the following:
c:\windows\system32>netdom renamecomputer %computername% /NewName:ssw-svr15

After choosing to proceed, the task completes successfully. You now need to reboot the server using the shutdown command. For the proper syntax, type:
shutdown /?
After reviewing the syntax, I will type the following: shutdown /r (switch for shutting down and restarting the computer) /t 10 (wait 10 seconds before shutting down and restarting) /c "Changed Server Name" (add a comment of up to 512 characters). The full syntax will look as follows:
shutdown /r /T 10 /C "Changed Server Name"

Let’s now configure our networking so we can join this server to a domain. In order to see which interfaces you have to configure, type:
netsh interface ipv4 show interface

The Local Area Connection that we are going to configure has an index value of 2. Let’s proceed and configure TCP/IP for this connection. Type the following command to set the TCP/IP information:
netsh interface ipv4 set address name="2" source=static address=192.168.1.199 mask=255.255.255.0 gateway=192.168.1.1

Follow the same example to configure DNS:
netsh interface ipv4 add dnsserver name="2" address=192.168.1.1 index=1

If you type ipconfig /all, you will see the newly added information.

Let’s join it to a domain! In order to perform this function, we will take advantage of netdom.exe. The syntax is as follows:
netdom join ssw-svr15 /domain:watchtower /userd:Administrator /passwordD:Password01
Note: Do not forget to reboot the server using the following command:
shutdown /r /T 10 /C "Added to domain"

As a final step, we should not forget to activate the server by typing the following:
slmgr.vbs -ato

This doesn’t even scratch the surface of what you can do with a Windows Server Core installation, but it begins to show you how powerful the command line is with a small Windows kernel. With the popularity of virtualization and server consolidation, the ability to virtualize a Server Core installation and attach a single role will become very popular in the datacenter.
Posted by Paritosh at 2:29 PM