Wednesday, December 13, 2006

Warning UTI Bank Subject to Phishing Attacks !!!

Dear Valued UTIBank Customer,



UTI's Internet Banking, is hereby announcing the New Security Upgrade.
We've upgraded our new SSL servers to serve our customers for
a better and secure banking service,against any fraudulent activities.Due to
this recent upgrade, you are requested to provide your account information.
For secure transaction also include your Transaction Id & password
by following the reference below.

Reference*
http://www.utibank.co.in/account.update.asp?D=signonasp35bh53967hut763d




Regards,
UTI Customers Service

UTI Bank LTD.


The link actually points to http://www.circolidistudiovaldicecina.it/uti/signn.htm
And I must say, it's a very well done job; the average "reasonable" person would not stop to ask why they are being asked for the transaction and login password at the same time!


BEWARE!!!




Tuesday, September 12, 2006

Gmail - Offline Temporarily

Well, at about 10:00 IST I saw an error on the Gmail site when I tried to sign in.
It said:

"Gmail is temporarily unavailable. Cross your fingers and try again in a few minutes. We're sorry for the inconvenience."


Is Gmail cooking up something new?

Tuesday, July 18, 2006

How to restore a hacked Linux server | MDLog:/sysadmin

How to restore a hacked Linux server | MDLog:/sysadmin: "How to restore a hacked Linux server
Posted by Marius in: Linux, Security
Every sysadmin will try his best to secure the systems he is managing. Hopefully you have never had to restore your own system from a compromise, and you will not have to do it in the future. Working on several projects to restore compromised Linux systems for various clients, I have developed a set of rules that others might find useful in similar situations. The types of hack you encounter can vary widely, and you may well see very different ones from those I present here or have seen live, but even so, these rules can be used as a starting point to develop your own recovery plan.

In most cases if you have a system compromise at root level, you will hear that you have to fully reinstall the system and start fresh because it will be very hard to remove all the hidden files the attacker has placed on the system. This is completely true and if you can afford to do this then you should do it. Still even in this case the compromised system contains valuable information that can be used to understand the attack and prevent it in the future.

Here is a short overview of the steps that I will present:

Don't panic. Keep calm and develop a plan of action
Disconnect the system from the network
Discover the method used to compromise the system
Stop all the attacker scripts and remove his files
Restore unaffected services
Fix the problem that caused the compromise
Restore the affected services
Monitor the system
Don't panic. Keep calm and develop a plan of action
OK. You have just found out that you have to restore a hacked system. My first suggestion is to remain calm. Don't rush and do something you will regret later. Why? Of course you will have to take action as soon as possible, but the compromise has probably been active for some time already, so whether you act in the first second or the tenth minute will probably not make much of a difference. If you have experience with such situations and a proper plan in mind, go for it and don't waste any time. If not, just relax and take five minutes to think about what you should do and how to solve the problem.
An example of a bad action at this point: you rush in and kill all the running scripts the attacker has launched, and then stall while you think about what to do next… In that time the attacker might notice he has been discovered (for example via his IRC bot) and might become upset and 'clean up' the system for you…
Of course this doesn't mean you should carry on with your planned trip and deal with it when you get back; I am just saying to take 5-10 minutes to think it through and develop a short action plan. There should be no dead time in your actions, and you should always know what the next step is.

Disconnect the system from the network
This might not always be possible. But if you have physical access to the system or even if you are remotely on a system in a datacenter that provides a way to connect from a console (either a regular remote console, or a KVM, or a DRAC card in Dell servers, etc.), then this should be the next step. Connect to the remote console and bring down the network interface.
If you don't have a remote console, here are some other ideas: you might be able to rent a KVM for a limited time from your datacenter, or you might have to write some iptables rules to block any kind of access besides your own IP (a sketch of this follows below).
After this your system will appear to be down to everyone, including the attacker.
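If you end up going the iptables route, a minimal sketch of that idea might look like the following (203.0.113.10 is a placeholder for your own administrative IP; any existing distribution firewall scripts are assumptions you will need to adapt to):

# keep only your own workstation's access; everything else inbound is dropped
iptables -F INPUT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -j DROP
# optionally cut the attacker's outbound connections as well
iptables -F OUTPUT
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -d 203.0.113.10 -j ACCEPT
iptables -A OUTPUT -j DROP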

Discover the method used to compromise the system
This step is, in my opinion, the most important one, and you should not proceed any further until you have a proper answer to the question: 'How did the attacker get in?' It will probably also be the most time-consuming step, as the types of attack vary widely. Still, if you don't find out how the attacker got in, you risk putting the system back online only for him to compromise it again in a matter of minutes. And this time he might not be so nice, and you may not have anything left to restore… So even though there is no general method for this, here are some ideas to get you started:

Depending on what tools you already have configured, you need to identify the files uploaded to the system:

if you have a system like Tripwire configured, use it to find out what files were added or changed.
if you don't have any such system installed, you might have to use the find command to search for files newer than x days that were added to the system, as well as files changed in the same interval (see the command sketch at the end of this section).
Who owns the uploaded files?

finding out the owner of the files will probably show you what application was used to get in. For example, files uploaded as the web user indicate that the web service was used to get in.
Investigate the uploaded files.

the files that were uploaded to the system might provide valuable information about the attack. For example, the attacker might use the same exploit from your compromised server to attack other systems; this can quickly show you what exploit he is using.
Get as much information as possible from the running scripts launched by the attacker.

as you have seen, I have not yet recommended stopping the running scripts the attacker might have launched. Why? Because they contain invaluable information for identifying the attack. Use lsof on them (lsof -p PID) to see useful details. Where are they located? What user owns them? You might find the source of the attack from this information.
Use Rootkit detection tools.

you might want to run some rootkit detection tools like rkhunter or chkrootkit to quickly identify common attacks.
Investigate system logs.

with the information gathered by now, you should be able to narrow down the amount of log data you have to investigate.
Hopefully you have found the source of the attack by now. Again, this depends heavily on the type of attack. The most common one you might see these days is an exploit of a vulnerable web application, followed by the attacker launching various scripts (IRC bots, scanning tools, attacks on other systems, etc.). Still, you might see something different, such as an attacker who launches no scripts at all and instead loads kernel modules to hide his tracks, making it more difficult to identify or even notice the compromise.
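For reference, here is a rough sketch of the kinds of commands mentioned in the ideas above (the 7-day window and the PID 12345 are placeholders, not values from the article):

# files added or changed in the last 7 days; adjust the window to fit your timeline
find / -type f -mtime -7 2>/dev/null > /root/recently-modified.txt
find / -type f -ctime -7 2>/dev/null > /root/recently-changed.txt
# inspect a suspicious running process (replace 12345 with the real PID)
lsof -p 12345
# quick checks for common rootkits, if the tools are installed
chkrootkit
rkhunter --check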

Stop all the attacker scripts and remove his files
Once you have identified the cause of the attack, you can safely kill all the running scripts launched by the attacker and remove all his files (save them in a different location for further investigation). At this point we no longer need the scripts running, as we have already gathered what information we could from them. The system is still offline at this point and no service is available to the world.

Don't forget to also clean up the locations the attacker might have used to restart his scripts on system reboot: look in the init scripts, rc.local, and the crontabs.
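A quick sweep of the usual restart locations might look something like this (a sketch, not an exhaustive list):

# system-wide and per-user cron jobs
cat /etc/crontab
ls /etc/cron.d /etc/cron.daily /etc/cron.hourly /etc/cron.weekly 2>/dev/null
for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null; done
# recently touched init scripts, and rc.local itself
ls -lt /etc/init.d | head
cat /etc/rc.local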

Restore unaffected services
Once we know which service was used for the attack, we can keep it stopped while we restore the network connection and all the unaffected services the system provides. For example, if the web server was used to get in, we can leave it stopped and restore other services such as mail and DNS to minimize the downtime of the system.

Fix the problem that caused the compromise
Before restarting the service that was used to compromise the system, you should fix the problem so this will not happen again once the service is open to the public… Depending on the problem, this might involve patching a vulnerable daemon, upgrading a vulnerable web application (or temporarily disabling it), writing some special rules to block the attack (for example, mod_security rules might help when no patch is available for a web application), etc.

Restore the affected services
Once you have fixed the problem you can restart the service used to get inside the system.

Monitor the system
Now is the time to monitor the system closely and see whether the fix you have implemented is working. Most likely the attacker will try to get in again once he sees he has 'lost' the system. If you notice any problem, stop the service at once and go back to the fixing step (stop the service, fix it, restore it).

Conclusion
These steps are obviously not usable in every case because of the variety of attacks you might encounter, but they can be used as a baseline to develop your own plan of action. Again, in such situations keep calm, don't rush, and work to restore the system based on a clear set of steps.
If you had similar experiences please feel free to share your own tips to help others that might find themselves in such a situation.



Tags: linux, security"



(Via DUCEA.)

Sunday, April 30, 2006

Designing a database-driven PHP App? Don't Forget the Data!!

Designing a database-driven PHP App? Don't Forget the Data!!
If you have a sourceforge account, and are on your way to becoming the best thing to happen to the web since Yahoo or Google, then I beg of you to put a call out for people who understand database design fundamentals.

Designing an interface with PHP is one thing. Designing an “application” is quite another, as it includes designing the architecture of the application, how the various components of the application will interact and communicate, and also how the data used by the application will be managed and stored. This last piece is a decidedly un-sexy part of application design, and is often also (and unfortunately) trivialized by developers.

Here’s the story: If you’re a PHP developer, I don’t really want you to learn how to design a database. No, really. I don’t. I want you to write PHP. There are few people who do both things extremely well, because both take a good bit of time. If you’re a PHP developer, I want you to write code that’ll make my head spin. However, the path to greatness is to be conscious of your own ignorance — so just acknowledge that you’ve never done or studied database design, and go find someone who has!

I have NEVER seen a help wanted ad on Sourceforge for data designers. I’ve looked. I occasionally go back and repeat my searches. I’m always disappointed. Meanwhile, while there are applications that practice sane database design fundamentals, the overwhelming majority of applications I’ve evaluated have also left me disappointed.

I believe that this problem roots from a few different problems and misconceptions, which I’ll list here:

1. The interface determines database design

I disagree. I think approaching a database this way is somewhat short sighted. Doesn’t that statement imply quite strongly that any future changes in the interface will inevitably mean a change in the database design? And then, if we’ve agreed that interface changes affect database design, doesn’t that raise the chances that future releases are either going to be incompatible with earlier ones, or will involve laborious database migration schemes?

Furthermore, this notion breaks down quite quickly when you get into data warehousing schemes where more than one application is acting on the data. Then what? Which interface is in charge of defining the database?

Really, database design and interface design should only be related through the data entities themselves. The data model is the story of how data is related, and the interface exploits the relationships to provide a view of the data to the user. Aside from that, interface design and database design are unrelated.

2. “I only need a small, simple database”

Maybe that’s true, but double check yourself. If you think that you only need three tables, but those tables are a mish-mash of duplicate data, null fields, or (the worst) fields that hold more than a single value, then your database design would appear to be the result of making SELECT statements as simple as possible instead of sane design principles. As your application marches ahead, you’ll eventually find that you need to overhaul the database, or scrap it completely and start over, which can be a pretty hellacious event to deal with as a developer. That database API you worked so hard on is all but useless, and it could’ve all been avoided if you’d just run all of this by a database person. In a lot of the instances where I see database design completely fail, it’s because the designers confused “data design” with minimizing the number of tables involved. In reality, more normalized data usually results in more tables, not less.

3. “More tables makes for harder coding”

Well, fewer tables, if it’s at the cost of normalization, makes for certain refactoring when you have to overhaul your data design because it can no longer support the needs of the application. Take your pick.

Really, it doesn’t have to be harder to code against in a lot of cases. Usually, inserts end up being slightly more complex. However, if you’re using a database that supports database “views”, then select statements will hardly have to change at all. If you don’t know what views are or how to create them, you should definitely seek the help of a database professional.

In Conclusion

Look, I’m not an advocate of using fifth normal form for everything. In fact, I’ve done a lot of database consulting and have never seen a database in fifth normal form. However, shooting for third normal form generally results in a database that is flexible enough to move in whatever direction your application decides to move, while at the same time keeping things simple enough to keep your PHP coders from having to become hardcore SQL wizards. Data design is a dry, monotonous, maybe masochistic practice. But it’s one that will pay you dividends well into the future.

BSD DevCenter 4/30/06 11:18 AM webmaster@oreillynet.com (0_o)

(Via BSD DevCenter.)

Friday, April 28, 2006

Introducing OpenBSD 3.9

Introducing OpenBSD 3.9: "Bob Huvane let us know about an article on informit.com regarding the upcoming release of 3.9. The article goes into some of the new features available in 3.9, and the ongoing financial situation among other things."



(Via OpenBSD Journal.)

Tuesday, March 28, 2006

Computerized Airport Screening

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns and 60 percent of (fake) bombs. And recently, testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. It makes you wonder why we're all putting our laptops in a separate bin and taking off our shoes. (Although we should all be glad that Richard Reid wasn't the "underwear bomber.")

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it's going to slip past the screeners pretty easily. The explosive material won't show up on the metal detector, and the associated electronics can look benign when disassembled. This isn't even a new problem. It's widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces.

But guns and knives? That surprises most people.

Airport screeners have a difficult job, primarily because the human brain isn't naturally adapted to the task. We're wired for visual pattern matching, and are great at picking out something we know to look for -- for example, a lion in a sea of tall grass.

But we're much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there's no point in paying attention. By the time the exception comes around, the brain simply doesn't notice it. This psychological phenomenon isn't just a problem in airport screening: It's been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.

And, as has been pointed out again and again in essays on the ludicrousness of post-9/11 airport security, improvised weapons are a huge problem. A rock, a battery for a laptop, a belt, the extension handle off a wheeled suitcase, fishing line, the bare hands of someone who knows karate ... the list goes on and on.

Technology can help. X-ray machines already randomly insert "test" bags into the stream -- keeping screeners more alert. Computer-enhanced displays are making it easier for screeners to find contraband items in luggage, and eventually the computers will be able to do most of the work. It makes sense: Computers excel at boring repetitive tasks. They should do the quick sort, and let the screeners deal with the exceptions.



Sure, there'll be a lot of false alarms, and some bad things will still get through. But it's better than the alternative.

And it's likely good enough. Remember the point of passenger screening. We're not trying to catch the clever, organized, well-funded terrorists. We're trying to catch the amateurs and the incompetent. We're trying to catch the unstable. We're trying to catch the copycats. These are all legitimate threats, and we're smart to defend against them. Against the professionals, we're just trying to add enough uncertainty into the system that they'll choose other targets instead.

Remember that the terrorists' goals have nothing to do with airplanes; their goals are to cause terror. Blowing up an airplane is just a particular attack designed to achieve that goal. Airplanes deserve some additional security because they have catastrophic failure properties: If there's even a small explosion, everyone on the plane dies. But there's a diminishing return on investments in airplane security. If the terrorists switch targets from airplanes to shopping malls, we haven't really solved the problem.

What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn't spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.

When I travel in Europe, I never have to take my laptop out of its case or my shoes off my feet. Those governments have had far more experience with terrorism than the U.S. government, and they know when passenger screening has reached the point of diminishing returns. (They also implemented checked-baggage security measures decades before the United States did -- again recognizing the real threat.)

And if I were investing in security, I would invest in intelligence and investigation. The best time to combat terrorism is before the terrorist tries to get on an airplane. The best countermeasures have value regardless of the nature of the terrorist plot or the particular terrorist target.

In some ways, if we're relying on airport screeners to prevent terrorism, it's already too late. After all, we can't keep weapons out of prisons. How can we ever hope to keep them out of airports?

- - -
Bruce Schneier is the CTO of Counterpane Internet Security and the author of Beyond Fear: Thinking Sensibly About Security in an Uncertain World. You can contact him through his website.

Snort on OpenWrt: Guarding the SOHO perimeter

Snort on OpenWrt: Guarding the SOHO perimeter

Monday March 27, 2006 (03:01 PM GMT)

By: Joe Barr

If you're edgy about security for your SOHO LAN, you might want to consider moving your first line of defense out past your firewall. How about on your router, for example? If your router runs OpenWrt, you can do exactly that, by running Snort, the open source intrusion detection system (IDS) project that has become the most widely deployed IDS in the world. Throw in the firewall that comes out of the box with OpenWrt White Russian, and suddenly the perimeter seems a lot more secure.

Nicholas Thill -- known as Nico in the OpenWrt community -- maintains three separate packages for Snort in his repository of packages. They include a plain Jane version, without any support for logging to a database, and two database-specific packages: one for MySQL and one for PostgreSQL. All are based on the Snort release 2.3.3-1 and are considered to be in a testing state and not yet included in the official release.

For the sake of simplicity, I'll discuss the plain Jane installation in this article. Regardless of which version you select, you need to be aware of the fact that you can overload and/or potentially crash your OpenWrt router by running Snort wide-open with all its rule sets and preprocessors (rule sets look for specific signatures, while preprocessors are plugin modules that extend Snort's capabilities), or simply by logging Snort's output to the local system and filling up all available space.

OpenWrt is a wonderful distribution, but it often runs on systems with serious memory and/or storage constraints, which you can easily overload by running Snort with all the trimmings.

Syslog remotely

Snort reports its findings in log records, so running Snort without saving them for later analysis is like typing a book without putting paper in the typewriter: you go through a lot of motions but don't get much of a return for your efforts. Given the typical router's constraints both in processing power and storage space, it makes sense to log Snort's findings remotely.

In order to start syslog logging remotely, you'll need to make changes to your configuration both on the router and on the system where the logging will be done. It's a snap to set up remote logging on OpenWrt, as explained in this Mini-HOWTO on the OpenWrt wiki.

From the OpenWrt command line, enter the following:

nvram set log_ipaddr=192.168.1.101
nvram commit

Change the IP address to match the address of the system running syslogd. Then edit /etc/inittab and add these two lines:

::respawn:/sbin/syslogd -n
::respawn:/sbin/klogd -n

And finally, edit /etc/init.d/rcS to add:

mkdir /var/log

To handle the logging on the remote side of the connection, add the -r option to the command line that starts syslogd and you're good to go. If you're using Ubuntu, for example, edit /etc/init.d/sysklogd and change the line that reads:

SYSLOGD="-u syslog"

to read:

SYSLOGD="-r -u syslog"

Of course, if you're like me and think that syslogd is so last generation, you can install syslog-ng instead, which accepts remote logging by default.
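If you go the syslog-ng route, the receiving side just needs a network source wired to a destination, roughly like this (a sketch; the log path is arbitrary and the exact syntax varies a little between syslog-ng versions):

source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_router { file("/var/log/router.log"); };
log { source(s_net); destination(d_router); };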

Installing and Configuring Snort

The easiest way to install Snort is with the OpenWrt Admin Console. But before you do that, check /etc/ipkg.conf on the router and make sure the repository mentioned above is included as a source. If it's not, add this line to the file:

src nico-t http://nthill.free.fr/openwrt/ipkg/testing

Then click on System and Installed Software in the OpenWrt Admin Console and refresh the list of available packages by clicking on Update package lists. All that's left to do then is scroll down the list of packages, find the version of Snort you want, and click on Install next to it.
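If you prefer the router's command line to the web console, the equivalent is roughly (a sketch; the package is assumed to be named snort in Nico's repository):

ipkg update
ipkg install snort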

Before you configure Snort, you'll need to get some rules from the Snort site. Snort rules define the packets that Snort should identify and take action on, and the actions that should be taken.

Rather than downloading only the rules included in the default OpenWrt snort.conf file, I downloaded a full set and put them in /etc/snort/rules. That way, I don't have to get new rule sets each time I tweak snort.conf.
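In practice that amounts to something like the following (the tarball name and its internal layout are placeholders; fetch the rule set matching your Snort 2.3.x from snort.org and copy it to the router first):

mkdir -p /etc/snort/rules
tar -xzf /tmp/snortrules-2.3.tar.gz -C /tmp
cp /tmp/rules/*.rules /etc/snort/rules/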

You'll need to define the HOME_NET variable near the top of /etc/snort/snort.conf, and also define an output method near the bottom. Once you've done those two things, Snort should be ready to run, except for whatever tweaking you need to do for preprocessors and rules.

The pre-configured version of snort.conf, for example, comes with almost all the preprocessors commented out. To activate them, simply remove the # signs from the beginning of each line of the section for the preprocessor you want. The same thing is true for the rules. Note: Remember to keep an eye on memory usage as you activate preprocessors and rule sets.
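For example, enabling the portscan preprocessor is just a matter of uncommenting its line in /etc/snort/snort.conf (the exact options below are illustrative; keep whatever your packaged snort.conf ships with):

# before (disabled):
# preprocessor sfportscan: proto { all } memcap { 10000000 } sense_level { low }
# after (enabled):
preprocessor sfportscan: proto { all } memcap { 10000000 } sense_level { low }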

My HOME_NET in snort.conf already looked like this, so I kept it:

var HOME_NET 192.168.1.0/24

For the output option, I removed the # from this line:

# output alert_syslog: LOG_AUTH LOG_ALERT

Those two changes made, I started snort running by entering snort -i vlan1 & and it blasted off, producing the following on my OpenWrt console:

root@OpenWrt:~# Running in IDS mode with inferred config file: /etc/snort/snort.conf

Initializing Network Interface vlan1

--== Initializing Snort ==--
Initializing Output Plugins!
Decoding Ethernet on interface vlan1
Initializing Preprocessors!
Initializing Plug-ins!
Parsing Rules file /etc/snort/snort.conf

+++++++++++++++++++++++++++++++++++++++++++++++++++
Initializing rule chains...
,-----------[Flow Config]----------------------
| Stats Interval: 0
| Hash Method: 2
| Memcap: 10485760
| Rows : 4099
| Overhead Bytes: 16400(%0.16)
`----------------------------------------------
No arguments to frag2 directive, setting defaults to:
Fragment timeout: 60 seconds
Fragment memory cap: 4194304 bytes
Fragment min_ttl: 0
Fragment ttl_limit: 5
Fragment Problems: 0
Self preservation threshold: 500
Self preservation period: 90
Suspend threshold: 1000
Suspend period: 30
Stream4 config:
Stateful inspection: ACTIVE
Session statistics: INACTIVE
Session timeout: 30 seconds
Session memory cap: 8388608 bytes
State alerts: INACTIVE
Evasion alerts: INACTIVE
Scan alerts: INACTIVE
Log Flushed Streams: INACTIVE
MinTTL: 1
TTL Limit: 5
Async Link: 0
State Protection: 0
Self preservation threshold: 50
Self preservation period: 90
Suspend threshold: 200
Suspend period: 30
Enforce TCP State: INACTIVE
Midstream Drop Alerts: INACTIVE

Stream4_reassemble config:
Server reassembly: INACTIVE
Client reassembly: ACTIVE
Reassembler alerts: ACTIVE
Zero out flushed packets: INACTIVE
flush_data_diff_size: 500
Ports: 21 23 25 53 80 110 111 143 513 1433
Emergency Ports: 21 23 25 53 80 110 111 143 513 1433
X-Link2State Config:
Ports: 25 691
112 Snort rules read...
112 Option Chains linked into 57 Chain Headers
0 Dynamic rules
+++++++++++++++++++++++++++++++++++++++++++++++++++

Warning: flowbits key 'tls1.client_hello.request' is checked but not ever set.
Warning: flowbits key 'sslv3.client_hello.request' is checked but not ever set.

+-----------------------[thresholding-config]----------------------------------
| memory-cap : 1048576 bytes
+-----------------------[thresholding-global]----------------------------------
| none
+-----------------------[thresholding-local]-----------------------------------
| none
+-----------------------[suppression]------------------------------------------
| none
+------------------------------------------------------------------------------
Rule application order: ->activation->dynamic->alert->pass->log
Log directory = /var/log/snort

--== Initialization Complete ==--

,,_ -*> Snort! <*-
o" )~ Version 2.3.3 (Build 14)
'''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html
(C) Copyright 1998-2004 Sourcefire Inc., et al.

To make sure Snort was logging to the remote machine, I checked the syslog there and found these two new entries in /var/log/syslog:

Mar 2 15:40:44 192.168.1.1 kernel: vlan1: dev_set_promiscuity(master, 1)
Mar 2 15:40:44 192.168.1.1 kernel: device vlan1 entered promiscuous mode

Before making any big changes to the rules or preprocessors, I wanted a baseline measurement of how much memory and CPU Snort was consuming, so I asked top. Top said:

Mem: 18420K used, 12164K free, 0K shrd, 896K buff, 4664K cached
Load average: 1.00, 1.01, 0.76 (State: S=sleeping R=running, W=waiting)

PID USER STATUS RSS PPID %CPU %MEM COMMAND
571 root R 436 1 98.4 1.4 vi
899 root R 412 561 0.7 1.3 top
898 root S 7916 561 0.3 25.8 snort
560 root S 640 537 0.3 2.0 dropbear
890 root S 640 537 0.0 2.0 dropbear
561 root S 464 560 0.0 1.5 ash
891 root S 460 890 0.0 1.5 ash
530 nobody S 436 1 0.0 1.4 dnsmasq
49 root S 428 1 0.0 1.3 syslogd
537 root S 420 1 0.0 1.3 dropbear
379 root S 400 1 0.0 1.3 udhcpc
1 root S 392 0 0.0 1.2 init
55 root S 392 1 0.0 1.2 init
541 root S 388 1 0.0 1.2 httpd
50 root S 340 1 0.0 1.1 klogd
542 root S 300 1 0.0 0.9 telnetd
3 root SWN 0 1 0.0 0.0 ksoftirqd_CPU0
7 root SW 0 1 0.0 0.0 mtdblockd
6 root SW 0 1 0.0 0.0 kupdated
4 root SW 0 1 0.0 0.0 kswapd
32 root SWN 0 1 0.0 0.0 jffs2_gcd_mtd4
5 root SW 0 1 0.0 0.0 bdflush
2 root SW 0 1 0.0 0.0 keventd

Right out of the box, and with only minimal rules in place, Snort was eating 25% of system memory. I added rules and preprocessors, primarily for the detection of scans, but I've tried to avoid taking more than 50% of memory or dropping below 1,000K of free memory. So far, so good, and with no impact on the performance of the router. But remember, you can overload the router if you're not careful, so keep a watchful eye on available resources as you tweak the config.

After I enabled the scan detection preprocessors and added a couple of additional rule sets, Snort's memory consumption climbed to 49.3% and the amount of free memory had shrunk to just over 5000K. I decided to stop there.

You might consider installing the plain Jane version first, then moving to one of the database-specific versions if you like. But if you do, remember that changing versions requires more than simply changing your snort.conf to indicate the database: you have to remove the plain Jane version of Snort and then install the database version. That process will replace your snort.conf, so if you want to keep your old one, make a copy before you install the new version of Snort.

For further information about Snort on OpenWrt, see this report by David Schwartzburg.

Monday, March 27, 2006

The World's Most Maintainable Programming Language: Part 1

The World's Most Maintainable Programming Language: Part 1: "

Have modern programming languages failed? From the point of view of learnability and maintainability, yes! What would a truly maintainable and learnable programming language look like? This is the first of a five-part series exploring the future of programming languages. I'll publish the rest throughout the week. (I'll enable comments when I publish the final part, this Friday -- so you can read my entire thesis in context.)

"



(Via BSD DevCenter.)

Friday, March 17, 2006

The Open Source Business Model

The Open Source Business Model: "How do open source projects sustain themselves? In the financial sense, I mean. Sometimes, a large corporation will offer a project funding. Or perhaps they will hire the primary developer and allow him to continue working on the project, as Microsoft has done with IronPython. But what about projects that aren't fortunate enough to gain that class of corporate funding? Or that aren't actively seeking it?

Most open source authors that I know of aren't writing their software for the money they hope to make with it. They work on it because they love it and would do it even if there is no hope of gaining funding of any sort. They work on their projects in their spare moments before and after the job that pays the bills and when they aren't enjoying time spent with their families.

Sometimes, a project gains a large enough user base that the project's author is able to offer his services around the project on a paid basis. Recently, Kevin Dangoor, the creator of TurboGears, announced that he would be offering consulting services around TurboGears, primarily development of TurboGears features, coaching, and training.

I love seeing open source authors take the initiative to help fund their project by finding a way to charge for their time on it. This is a win-win-win situation. It's an obvious win for the project author since they get paid for doing something they love. It's a win for the person or company who is paying for the author's time since the author is an expert on the subject. And it's a win for the community because the primary author is able to devote more time to the project. Kevin, I'm sure you'll get plenty of business. I wish you much success in this endeavor.

ONLamp.com 3/9/06 7:03 PM webmaster@oreillynet.com (Jeremy Jones)"



(Via ONLamp.com.)

Thursday, March 16, 2006

Network Filtering by Operating System

Network Filtering by Operating System
by Avleen Vig
02/16/2006

You manage a heterogeneous network and want to provide different Quality of Service agreements and network restrictions based on the client operating system. With pf and altq, you can now limit the amount of bandwidth available to users of different operating systems, or force outbound web traffic through a transparent filtering proxy. This article describes how to install pf, altq, and Squid on your FreeBSD router and web proxy to achieve these goals.
Mission Objective

In an ideal environment, there would be no need for bandwidth shaping, OS fingerprint-based filtering, or even Quality of Service (QoS). Several factors in the real world require a change of game plan. Bandwidth is not free, and many ISPs charge customers based on bandwidth usage. Worms, viruses, and compromised systems can all lead to higher bandwidth costs. In the wake of the W32.Slammer worm, which saturated the connections of infected networks, many companies saw their monthly connectivity bills skyrocket due to the worm's traffic.

Filtering your connections based on operating system can go partway toward keeping such situations from running away. While I will focus on filtering traffic from Windows systems, this process can equally apply to BSD, Linux, Mac OS, or a host of other operating systems listed in the pf.os file on your system. This may be especially useful to people running older versions of OSes that have not been or cannot be patched but still require some network connectivity.

As an extension of transparent filtering, content filtering is also possible, with tools such as squidGuard allowing children and corporate desktops alike to browse in relative safety.
Tools of the Trade

During my research for this article, several people asked me why I chose to use BSD, pf, altq, and Squid for this task. Other tools come close to providing the required functionality, but none fills the requirements as readily as these. Linux and iptables can work with Squid to provide a transparent proxy but cannot filter connections by operating system. Though other proxy servers exist, Squid is one of the best available today.

It is important to note that OS fingerprinting works only on TCP SYN packets, which initiate TCP sessions, and not on currently established connections or UDP sessions. While this will not be a problem for most systems and network administrators, you may want to pay more attention to your UDP filtering rules.
Installing pf and altq

pf and altq provide packet filtering and bandwidth shaping, respectively. Their relationship is not unlike that between IPFIREWALL and DUMMYNET, where the same rules file configures both pf and altq.

While pf is universally usable, altq requires a supported network card. The good news is that most network cards in common use are supported. Look at the Supported Devices section of man 4 altq to find a list of supported network cards.

Once you've confirmed you have a supported device, add pf and altq to your kernel. You will need to recompile your kernel as described in the FreeBSD Handbook. First, add a few options to the end of your kernel configuration file:

device pf
options ALTQ
options ALTQ_CBQ
options ALTQ_RED
options ALTQ_RIO
options ALTQ_HFSC
options ALTQ_CDNR
options ALTQ_PRIQ

Note: If you are installing altq on a multiprocessor system, add options ALTQ_NOPPC to your configuration before you recompile your kernel.

After you have recompiled your kernel and rebooted, test pf to make sure it installed correctly with the command pfctl -s rules. If you see the error pfctl: /dev/pf: No such file or directory, pf did not install correctly. If you see the error No ALTQ support in kernel ALTQ related functions disabled, pf is working but altq is not. In the latter case, you will still be able to force users through a transparent proxy, but you won't be able to limit bandwidth using altq.
Installing Squid with Transparent Filtering Support

Install Squid with the command:

% cd /usr/ports/www/squid && make config install clean

This will present you with a list of options for compiling Squid. To enable transparent proxy support, select SQUID_PF. You can also select or deselect any other option. I often find SQUID_SNMP useful for gathering and graphing statistics using RRDTool. Once Squid is installed, edit /usr/local/etc/squid/squid.conf. Set at least the options:

http_port YOUR_PROXY_IP:3128
http_access deny to_localhost
acl our_networks src YOUR_NETWORK/24
http_access allow our_networks
visible_hostname YOUR_HOSTNAME
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

Replace YOUR_PROXY_IP with the IP address your proxy server will listen on, YOUR_NETWORK/24 with your internal network address range (for example, 192.168.0.0/24), and YOUR_HOSTNAME with the hostname you want to show to users in error messages. YOUR_HOSTNAME is not required but extremely useful if you have a cluster of proxy servers sharing a common front end such as a load balancer.

While you can get by with changing only these options, you should spend some time going through the remainder of your squid.conf file and tuning it to your needs. Over time, you may need to tune various other options such as cache sizes or connection timeouts. The Squid configuration file is a behemoth; spending an hour now getting familiar with various options may save you time and trouble in the future.
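To give a flavour of where this is heading, rules that treat Windows clients differently look roughly like the following in pf.conf. This is only a sketch, not the article's own ruleset: the queue names, bandwidth figures, and the redirect to a local Squid on port 3128 are illustrative assumptions, and $ext_if/$int_if are macros assumed to be defined earlier in the file.

# a small queue for traffic from Windows hosts, a default queue for everything else
altq on $ext_if cbq bandwidth 10Mb queue { q_default, q_windows }
queue q_default bandwidth 9Mb cbq(default)
queue q_windows bandwidth 1Mb
# push outbound web traffic from Windows clients through the transparent Squid
rdr on $int_if inet proto tcp from any os "Windows" to any port www -> 127.0.0.1 port 3128
# assign traffic initiated by Windows hosts to the small queue
pass in on $int_if proto tcp from any os "Windows" to any keep state queue q_windows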

Wednesday, March 15, 2006

10 Best Security Live CD Distros (Pen-Test, Forensics & Recovery) »

10 Best Security Live CD Distros (Pen-Test, Forensics & Recovery) »: "10 Best Security Live CD Distros (Pen-Test, Forensics & Recovery)
Darknet spilled these bits on March 14th 2006 @ 9:17 am
1. BackTrack

The newest contender on the block of course is BackTrack, which we have spoken about previously. An innovative merge between WHax and Auditor (WHax formerly WHoppix).

BackTrack is the result of the merging of two innovative penetration-testing live Linux distributions, Whax and Auditor. Combining the best features from both distributions and paying special attention to small details, this is probably the best version of either distribution ever to come out.

Based on SLAX (Slackware), BackTrack provides user modularity. This means the distribution can be easily customised by the user to include personal scripts, additional tools, customised kernels, etc.

Get BackTrack Here.

2. Operator

Operator is a very fully featured LiveCD totally oriented around network security (with open source tools of course).

Operator is a complete Linux (Debian) distribution that runs from a single bootable CD and runs entirely in RAM. The Operator contains an extensive set of Open Source network security tools that can be used for monitoring and discovering networks. This can turn virtually any PC into a network security pen-testing device without having to install any software. Operator also contains a set of computer forensic and data recovery tools that can be used to assist you in data retrieval on the local system.

Get Operator Here

3. PHLAK

PHLAK or [P]rofessional [H]acker's [L]inux [A]ssault [K]it is a modular live security Linux distribution (a.k.a. a LiveCD). PHLAK comes with two light GUIs (Fluxbox and XFCE4), many security tools, and a spiral notebook full of security documentation. PHLAK is a derivative of Morphix, created by Alex de Landgraaf.

Mainly based around Penetration Testing, PHLAK is a must have for any pro hacker/pen-tester.

Get PHLAK Here (You can find a PHLAK Mirror Here, as the page often seems to be down).


4. Auditor

Auditor although now underway merging with WHax is still an excellent choice.

The Auditor security collection is a Live-System based on KNOPPIX. With no installation whatsoever, the analysis platform is started directly from the CD-Rom and is fully accessible within minutes. Independent of the hardware in use, the Auditor security collection offers a standardised working environment, so that the build-up of know-how and remote support is made easier.

Get Auditor Here

5. L.A.S Linux

L.A.S Linux or Local Area Security has been around quite some time as well; although development has been a bit slow lately, it's still a useful CD to have. It has always aimed to fit on a MiniCD (180MB).

Local Area Security Linux is a ‘Live CD’ distribution with a strong emphasis on security tools and small footprint. We currently have 2 different versions of L.A.S. to fit two specific needs - MAIN and SECSERV. This project is released under the terms of GPL.

Get L.A.S Linux Here

6. Knoppix-STD

Horrible name, I know! But it's not a sexually transmitted disease, trust me.

STD is a Linux-based Security Tool. Actually, it is a collection of hundreds if not thousands of open source security tools. It’s a Live Linux Distro, which means it runs from a bootable CD in memory without changing the native operating system of the host computer. Its sole purpose in life is to put as many security tools at your disposal with as slick an interface as it can.

Get Knoppix-STD Here


7. Helix

Helix is more on the forensics and incident response side than the networking or pen-testing side. Still a very useful tool to carry.

Helix is a customized distribution of the Knoppix Live Linux CD. Helix is more than just a bootable live CD. You can still boot into a customized Linux environment that includes customized linux kernels, excellent hardware detection and many applications dedicated to Incident Response and Forensics.

Get Helix Here

8. F.I.R.E

A little out of date, but still considered the strongest bootable forensics solution (of the open-source kind). Also has a few pen-testing tools on it.

FIRE is a portable bootable cdrom based distribution with the goal of providing an immediate environment to perform forensic analysis, incident response, data recovery, virus scanning and vulnerability assessment.

Get F.I.R.E Here

9. nUbuntu

nUbuntu or Network Ubuntu is very much a newcomer in the LiveCD arena, as Ubuntu, on which it is based, is pretty new itself.

The main goal of nUbuntu is to create a distribution which is derived from the Ubuntu distribution, and add packages related to security testing, and remove unneeded packages, such as Gnome, Openoffice.org, and Evolution. nUbuntu is the result of an idea two people had to create a new distribution for the learning experience.

Get nUbuntu Here

10. INSERT Rescue Security Toolkit

A strong all around contender with no particular focus on any area (has network analysis, disaster recovery, antivirus, forensics and so-on).

INSERT is a complete, bootable linux system. It comes with a graphical user interface running the fluxbox window manager while still being sufficiently small to fit on a credit card-sized CD-ROM.

The current version is based on Linux kernel 2.6.12.5 and Knoppix 4.0.2

Get INSERT Here

Extra - Knoppix

Remember this is the innovator and pretty much the basis of all these other distros, so check it out and keep a copy on you at all times!

Not strictly a security distro, but definitely the most streamlined and smooth LiveCD distribution. The new version (soon to be released - Knoppix 5) has seamless NTFS writing enabled with libntfs+fuse.

KNOPPIX is a bootable CD or DVD with a collection of GNU/Linux software, automatic hardware detection, and support for many graphics cards, sound cards, SCSI and USB devices and other peripherals. KNOPPIX can be used as a productive Linux desktop, educational CD, rescue system, or adapted and used as a platform for commercial software product demos. It is not necessary to install anything on a hard disk.

Get Knoppix Here

Other Useful Resources:

SecurityDistros
FrozenTech LiveCD List
DistroWatch

Others to consider (Out of date or very new):

SlackPen
ThePacketMaster
Trinux
WarLinux
Network Security Toolkit
BrutalWare
KCPentrix
Plan-B
PENToo"




Tuesday, March 14, 2006

IP COP

"The IPCop project is a GNU/GPL project that offers an exceptional feature packed stand alone firewall to the internet community. Its comprehensive web interface, well documented administration guides, and its involved and helpful user/administrative mailing lists make users of any technical capacity feel at home. It goes far beyond a simple ipchains / netfilter implementation available in most Linux distributions and even the firewall feature sets of commercial competitors.

"Firewalls have had to undergo a tremendous metamorphosis as a result of evolving threats. IPCop is exemplary in offering such a range of default features and even further a large set of optional plug-ins which can provide further functionality..."

Friday, March 10, 2006

Virtualization with FreeBSD Jails

Virtualization with FreeBSD Jails

by Dan Langille
03/09/2006
This article shows how I created a jail under FreeBSD 5. Though FreeBSD 6.0 has come out since I wrote this article, the strategy should remain the same. I'll update the article with any changes should anything be different.


I have written previously about jails on FreeBSD 4. The goal of this jail is the creation of a test environment for a project I've been working on. Until recently, I've been providing a dedicated machine for exclusive use by the Bacula network backup project. That system ran regression tests on FreeBSD. In a recent consolidation of hardware, I replaced several older machines with one newer machine. I wanted to dispose of the computer used by the Bacula project and move them to a more powerful computer. However, I didn't want them to have exclusive use of this system. I wanted to use the same computer and not have us interfere with each other.

Lift and Separate

Jails can separate different processes and keep them apart so they cannot interfere with each other. For example, you could run Apache in a jail and keep it away from everything else on the machine. Should someone find an exploit in Apache and use it to compromise your system, the intruders can only do what the jail allows them to do. A jail can consist of a full operating system, or a single executable.

The solution I used was to create a virtual machine for use by the Bacula project. I had recently acquired a Pentium 4 2.4GHz machine. It was pretty fast, so I decided to use this system for my own development purposes. It will also be sitting idle for long periods of time, so I might as well let someone else use it as well. I don't want them to have access to the things I'm working on, so I decided to put them in a jail.

From within a jail, they are chrooted and cannot see anything outside of the jail. At the same time, it appears to them as if they are running on their own machine with their own operating system. As far as they know, they have their own computer and nobody else is on the system.

Running a virtual system within a jail is a good solution if you want to provide someone with resources, but don't want them to have complete control over your system. A jail can help you deal with issues of security and access, and improve the usage of existing resources, all at the same time.

Jail Documentation

The main document for creating a jail is man jail. I followed the instructions listed under "Setting up a Jail Directory Tree." I used those instructions to create the jail. You will need the full source tree for the system you're going to create. I used the /usr/src/ directory I had from my most recent build world.
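For the record, the "Setting up a Jail Directory Tree" steps from man jail boil down to roughly the following on FreeBSD 5.x (a sketch; D matches the jail location used later in this article, and the commands assume an up-to-date source tree):

D=/home/jail/192.168.0.155.bacula
cd /usr/src
mkdir -p $D
make world DESTDIR=$D
make distribution DESTDIR=$D
mount -t devfs devfs $D/dev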

There's one step from man jail that I did not follow. I left Sendmail (actually, Postfix) running. I just changed it so that it did not listen on all IP addresses. I added this setting to /usr/local/etc/postfix/main.cf:

inet_interfaces = $myhostname

That allows the jail to run its own mail server.

I put my jail at /home/jail/192.168.0.155.bacula. This is the value I assigned to D in the instructions. After you have installed the jail, if you peek inside that directory, you'll see it looks just like the root directory of a typical FreeBSD system:

[dan@dfc:/home/jail/192.168.0.155.bacula] $ ls
COPYRIGHT etc libexec root usr
bin home mnt sbin var
boot kernel proc sys
dev lib rescue tmp
[dan@dfc:/home/jail/192.168.0.155.bacula] $
Terminology: Host Versus Jail
The host environment is the main system and is where you first install FreeBSD on the computer. It is in the host environment that you create a jail. The Bacula project will do their testing in the jail. They have access to the jail and only the jail. They will not have access to the host environment at all.

This concept of host environment and jail environment will come up again in this article. It is important that you understand what each one is.

In this example, the host environment is at IP address 192.168.0.100 and the jail is at 192.168.0.155.
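For reference, bringing a jail like this up typically means aliasing its address onto an interface and then invoking jail(8), along these lines (a sketch using FreeBSD 5-era syntax; the interface name fxp0 and the hostname are assumptions, not taken from the article):

ifconfig fxp0 alias 192.168.0.155 netmask 255.255.255.255
jail /home/jail/192.168.0.155.bacula bacula.example.org 192.168.0.155 /bin/sh /etc/rc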

Modifying Other Daemons

Most daemons will listen to whatever IP addresses are available to them. After starting your jail, if you try to ssh to it, you will not get into it. You'll be in the host environment instead. To get into the jail environment via ssh, you need to:

Tell the host environment sshd not to listen to the jail's IP address.
Run sshd in the jail.

Host Environment syslogd
This entry in /etc/rc.conf tells syslogd to not listen on any IP address.

syslogd_flags="-ss"

That allows syslogd to run in both the host and the jail environments.

Host Environment inetd
This entry in /etc/rc.conf tells inetd to listen on a specific IP address. This address is that of the host environment:

inetd_flags="-wW -C 60 -a 192.168.0.100"

Note that the first part of the flags in that line is from /etc/defaults/rc.conf:

inetd_flags="-wW -C 60" # Optional flags to inetd

Host Environment sshd
To alter the host environment sshd so it listens only to host environment IP addresses, modify /etc/ssh/sshd_config and set the IP address for the Listen directive:

ListenAddress 192.168.0.100

Then restart the main sshd process:

kill -HUP `cat /var/run/sshd.pid`

Use telnet to verify that the host environment is not listening on the jail address:

$ telnet 192.168.0.155 22
Trying 192.168.0.155...
telnet: connect to address 192.168.0.155: Connection refused
telnet: Unable to connect to remote host
If you don't get a connection, the host environment is not listening. This assumes that you have not yet started sshd in the jail environment.

Jail Environment sshd
To start sshd in the jail environment, add the following line to /etc/rc.conf:

sshd_enable="YES"

Jail Environment syslogd
In addition, I also swapped console output to /var/log/messages, as shown in this snippet from /etc/syslog.conf:

#*.err;kern.warning;auth.notice;mail.crit /dev/console
*.err;kern.warning;auth.notice;mail.crit /var/log/messages

Wednesday, March 08, 2006

OS X security contest ends without incident


Published: 2006-03-08


A new Mac OS X security contest reported on yesterday has ended early, but without incident.

The contest was started on March 6th in response to an article published by CNET News.com and ZDNet of a previous OS X hacking contest. The article initially failed to indicate that contest participants were given local user-level access to the system via SSH - highly unlikely in a real-world setting.

Dave Schroeder at the University of Wisconsin wrote that the test machine, which had traffic spiking to 30 Mbps, received over half a million web requests, 4000 attempted logins via SSH, and had six million events logged in less than 38 hours. Brian Rust, handling media relations for the contest, indicated that the contest was, "not an activity authorized by the UW-Madison," and the university's CIO requested it be ended prematurely. The contest ended without any compromise of the host's security.

Thursday, March 02, 2006

Zero to IPSec in 4 minutes

Zero to IPSec in 4 minutes
Dragos Ruiu 2006-02-28
This short article looks at how to get a fully functional IPSec VPN up and running between two fresh OpenBSD installations in about four minutes flat.

Until recently, setting up an open-source IPSec solution has been woefully complex and involved wading through an alphabet soup of committee-designed protocols. Many people give up on IPSec after their first peek at the horrible and complex software documentation, opting instead to install some sort of commercial SSL VPN which seems much simpler. For those who have been through this exercise, a jumble of SAs, ESPs, AHs, SPIs, CAs, certs, FIFOs, IKEs and policy jargon inside RFCs is enough to give anyone a headache. However, there is good news on the IPSec front: it has all finally been covered up with a nice, simple way to set it up under OpenBSD.
In this short article we'll look at how to get a fully functional IPSec VPN up and running between two fresh OpenBSD machines in about four minutes flat. The goal here certainly isn't to give an exhaustive overview of all the options available in IPSec or OpenBSD, but rather to show just how quickly and easily we can be up and running when others take days or weeks to do the same thing.

Introducing ipsecctl in OpenBSD

You might not have noticed it, but a new command has sneaked into OpenBSD, starting with version 3.8: ipsecctl. And it's truly wonderful. It provides a much needed layer of abstraction to all the highly flexible, but horribly intricate details of IPSec. In reality, most people don't need half the configuration and protocol options that IPSec provides, so this abstraction layer is sorely needed.
If all one wants to do is set up a simple encrypted Virtual Private Network (VPN) between two sites, the configuration steps one would otherwise have to go through were always truly ugly, and a bottle of aspirin was a mandatory accessory. No more! Now, with ipsecctl, a simple VPN can be set up by editing one simple configuration file on OpenBSD: /etc/ipsec.conf.

As a test, my colleague Sean Comeau and I took two freshly installed OpenBSD firewalls, in their default configurations, and edited three files. We changed a total of seven lines of configuration on each system - and had an IPSec VPN exchanging packets between our two sites within four minutes of the first boot.

Those who haven't installed OpenBSD before will find the installation process surprisingly easy. The two most popular ways of installing are via CD-ROM (an inexpensive option, but it must be purchased from the OpenBSD team), or via a simple FTP install using a floppy or CD-ROM boot media. With a broadband connection, a complete FTP install of a default system can easily be completed in under ten minutes. For the purpose of this article, we'll assume you have two fresh installs of OpenBSD ready to go. Note that if you follow the CVS builds of either OpenBSD 3.8-stable or OpenBSD 3.8-current, both machines in your VPN should be running the same snapshot.

An IPSec example

To illustrate just how simple IPSec is to set up in OpenBSD, let's start with an example. First, let's quickly review our goals. We want to network two remote subnets via a fully encrypted, standard IPSec Virtual Private Network (VPN). Both our subnets will have OpenBSD Network Address Translation (NAT) firewalls.
Network A:

External IP address: 1.2.3.4
Internal IP address block: 10.1.1.0/24

Network B:

External IP address: 5.6.7.8
Internal IP address block: 10.2.2.0/24

The configuration of pf, which is our firewall and provides NAT, is found in /etc/pf.conf. On both systems in this example, pf.conf should look as follows:


ext_if="fxp0"
int_if="fxp1"
set skip on { lo $int_if }
nat on $ext_if from !($ext_if) -> ($ext_if:0)
block in
pass out keep state
Both systems have had IP forwarding turned on by uncommenting the "net.inet.ip.forwarding=1" line in the /etc/sysctl.conf file. IP forwarding is turned off by default, but is required for NAT. Now that we understand our objectives and have two fully functional base systems, what do we have to do to link our two internal subnets together with a VPN? As you will see, the configuration is surprisingly simple.
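
As a quick aside (standard sysctl usage, not part of the original article): if you would rather not reboot just to pick up that change, you can also turn IP forwarding on for the running kernel from the command line; older releases may need the -w flag.

# enable IP forwarding immediately, without a reboot
sysctl net.inet.ip.forwarding=1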

Step 1. Configure IPSec

First, add the following lines to Firewall A in /etc/ipsec.conf:

ike esp from 10.1.1.0/24 to 10.2.2.0/24 peer 5.6.7.8
ike esp from 1.2.3.4 to 10.2.2.0/24 peer 5.6.7.8
ike esp from 1.2.3.4 to 5.6.7.8
Next, add the following lines to Firewall B's /etc/ipsec.conf:


ike passive esp from 10.2.2.0/24 to 10.1.1.0/24 peer 1.2.3.4
ike passive esp from 5.6.7.8 to 10.1.1.0/24 peer 1.2.3.4
ike passive esp from 5.6.7.8 to 1.2.3.4
The passive modifier in the configuration denotes that Firewall A will initiate the connection and Firewall B will listen for it.
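
Before loading either file, it's worth parsing it without applying anything. Like pfctl, ipsecctl has a dry-run flag for this; a minimal example (check ipsecctl(8) on your release):

# parse /etc/ipsec.conf and report errors without loading any flows
ipsecctl -n -f /etc/ipsec.conf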

Step 2. Allow IPSec through the firewall

Now, add the following line to /etc/pf.conf to configure the firewall on Firewall A:

pass quick on $ext_if from 5.6.7.8
and change the "set skip" line from:


set skip on { lo $int_if }
to:


set skip on { lo $int_if enc0 }
This adds enc0, the interface on which IPSec-encapsulated traffic appears, to the list of interfaces that pf will not filter.

Now let's move on to Firewall B. In this /etc/pf.conf, add the following lines:


pass quick on $ext_if from 1.2.3.4
set skip on { lo $int_if enc0 }
We're done with both the firewall/NAT and IPSec configuration, so let's move on to the next step - copying the keys.

Step 3. Copy the isakmpd keys to each system

On Firewall A (1.2.3.4), copy /etc/isakmpd/private/local.pub from Firewall B into /etc/isakmpd/pubkeys/ipv4/5.6.7.8.
Similarly, on Firewall B (5.6.7.8) copy /etc/isakmpd/private/local.pub from Firewall A into /etc/isakmpd/pubkeys/ipv4/1.2.3.4.

The reader should note that while this configuration uses numeric IP addresses, the configuration can also be done with fully qualified domain names. To use domain names, simply copy the keys into the /etc/isakmpd/pubkeys/fqdn directory, and use the srcid and dstid keywords in your /etc/ipsec.conf flow specifications.
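
As a rough illustration (my own sketch, not from the article, using the hypothetical hostnames fw-a.example.org and fw-b.example.org), the first flow on Firewall A from Step 1 might then become:

ike esp from 10.1.1.0/24 to 10.2.2.0/24 peer fw-b.example.org srcid fw-a.example.org dstid fw-b.example.org

with the corresponding public key copied into /etc/isakmpd/pubkeys/fqdn/fw-b.example.org on Firewall A, and vice versa on Firewall B. Check ipsec.conf(5) for the exact keyword placement on your release.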

Step 4. Start the VPN

To start the VPN, use the following commands on both systems:

isakmpd -K
ipsecctl -f /etc/ipsec.conf
Congratulations! You've just set up an IPSec VPN. You should be pleased to know that the ipsecctl command has automatically configured isakmpd and all its horrible config files, and it has chosen nice, sensible, and secure defaults for you.

The -K option tells isakmpd to skip the intricate and rarely needed policy configurations that would otherwise be required.

Now let's test the VPN. You should be able to ping nodes on 10.2.2.* from nodes on 10.1.1.* and vice versa. If this doesn't work, try starting up isakmpd with the debug option "isakmpd -K -d" to get more diagnostics.
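
Another quick sanity check (a generic tcpdump invocation, not something from the article): watch the enc0 interface on either firewall while a node pings across the tunnel, and you should see the packets passing through the IPSec encapsulation interface.

# watch traffic on the IPSec encapsulation interface
tcpdump -n -i enc0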

Step 5. Set this up to start automatically at reboot

The default startup daemons on OpenBSD are found in the standard rc.conf file. Edit /etc/rc.conf and change the isakmpd line to:

isakmpd_flags="-K"
Also ensure that pf=YES is set in rc.conf, so that your pf firewall/NAT is started at the next boot. We also want to ensure that ipsecctl runs automatically, so add the following line to /etc/rc.local:


ipsecctl -f /etc/ipsec.conf
Finally, you may wish to edit your /etc/changelist on both Firewall A and B to ensure that your new /etc/ipsec.conf configuration file is listed. While this step is entirely optional, it ensures that any changes to your IPSec configuration are tracked and emailed to the administrator on a daily basis, as part of the daily mail script. For this to work, you must have configured /etc/mail/aliases and have given the root alias your own email address, and then run 'newaliases' to commit the changes.
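
Assuming /etc/ipsec.conf isn't already listed on your system, appending it to the changelist is a one-liner:

echo "/etc/ipsec.conf" >> /etc/changelist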

And there you have it. Isn't that nice and simple? If you are familiar with pf and pfctl, ipsecctl will seem very easy, as it provides a very similar interface. For example, you can get the status of the IPSec flows and security associations with:


ipsecctl -sa
And so on. Amazingly, it took more than a decade for someone to finally provide a simple, straightforward configuration interface to IPSec. It's now simple enough that we are finally able to recommend IPSec to novices so they can easily set up an IPSec VPN.

Conclusion

In this short article we looked at how easy it is to set up an IPSec VPN between two fresh OpenBSD installations. We started with two default installations and changed a total of seven configuration lines. Instead of taking days or weeks to get an IPSec VPN up and running, ours was running in about four minutes.
Thanks to Theo and the OpenBSD team for this, as we believe this is truly a huge step forward for users everywhere who want to use IPSec. Ipsecctl is what we have long needed.

As a personal note, I'd like to see other *BSD committers port this to many other systems. Ipsecctl was specified by Matt Sauve-Frankel, and coded by Hans-Joerg Hoexer. While ipsecctl doesn't appear to work fully with IPv6 yet, support for this should be on the way. Also note that there may be differences with how ipsecctl works on the CVS versions of 3.8-current and 3.8-stable, and therefore it is recommended that both firewalls in your IPSec configuration run the same version of OpenBSD.

Monday, February 27, 2006

Don’t Get Stampeded By The 7.1 Parade

As a home theater tech critic, I spend much of my time evangelizing for surround sound. I do it unashamedly and with all my heart. I love surround sound and I want everyone else to get as much pleasure from it as I do. But I worry that a lot of people still waiting to dip a toe in the sound-field are turned off by a bunch of seemingly conflicting numbers: 5.1 and 6.1 and 7.1.

I’ve seen this over and over with people who are just getting into home theater as a hobby. When told they have a choice of 5.1 or 6.1 or 7.1 channels, their eyes glaze over and they mumble something along the lines of: “Um, well, I guess I’ll just keep my two speakers and think about it.” When speaking with newbies, I’ve learned to discuss surround as a 5.1-channel medium, which it essentially is, and leave it at that.

Why bug people with a choice that most would rather not make? The expansion of the 5.1-channel standard was born in the moviehouse, where it’s easier to cover a large space with surround effects if you add a back channel served by speakers in the back of the house.

In film exhibition, 6.1- and 7.1-channel systems make sense. At home, however, 5.1 channels are quite enough. It’s easy to generate a solid soundfield in a small space with three speakers in front and two on the rear of the side walls. To me it’s self-evidently nonsensical to have four surround speakers outnumbering the three in front.

Your family’s attention is riveted on the screen and that’s where a home surround system should deliver most of its firepower. Adding more channels gives your surround receiver more work to do. That’s never a good thing. Despite the “100 watts per channel” claims you see in spec sheets, the majority of surround receivers measure at more like 35 watts.

So when an action-movie soundtrack swells up, it drives the receiver into clipping. This might sound like a slight deflating of dynamics. Or the sound may get harsher as it gets louder. In the worst-case scenario, the receiver overheats and shuts down. If you don’t like what you hear when you turn up the volume, clipping is what you’re hearing.

There are two ways to minimize clipping. One is to dump your receiver for separate components—a multi-channel power amp and a surround preamp-processor. This will cost you more money and make your system bulkier and more complex. The alternative is to buy speakers with a high sensitivity rating, measured in decibels (dB), say in the low to mid nineties. Unfortunately they’re not always the best-sounding ones. (Klipsch is one of the rare exceptions.)

Clipping is a fact of life in all except the most lavish home theater systems. But the goal should always be to minimize it. And adding needless surround channels makes it worse. When most folks go out to buy a surround receiver, what’s uppermost in their minds is the price point, not the size of the power supply. The slow, sinking feeling comes later—when they turn up the volume and don’t like what they hear.

At this point I should define a few terms. Feel free to skip this paragraph if you’ve just had a heavy meal. Dolby Digital and DTS are the surround formats used on DVDs; Dolby Digital also plays a role in DTV broadcasting. They originated as 5.1-channel formats. Their expanded cousins are Dolby Digital EX, also known as THX Surround EX, since the two companies co-developed it; and DTS-ES. In Dolby Digital EX, the side-surround channels are discretely encoded, while the back-surround channel (singular, though it may be served by two speakers) is derived from the side-surrounds by a technique called matrixing. Or as I prefer to call it, fakery. DTS-ES comes in two forms, Matrix (with the back-channel information faked) and the all-too-rare Discrete (with the back-channel information encoded in its own discrete channel). If you understood what I just said, you’re a fellow drooler; if you didn’t, you’re probably getting annoyed and losing interest, which is precisely the point I’m trying to make. I’ve limited myself to the barest essentials and just look at the length of this graf. Having to reread it makes me queasy.

If you’re worried about missing out on back-channel information in surround soundtracks, I’d advise you not to fret over it. Most DVD soundtracks are either Dolby Digital 5.1 or DTS 5.1. The high-res music formats, SACD and DVD-Audio, are strictly 5.1-channel affairs with no 6.1 or 7.1 equivalents. If you feed a 7.1-channel receiver with a 5.1-channel signal, it will usually fake something for the back-surrounds using Dolby Pro Logic IIx processing. For my own part, I’d rather listen to five (.1) honest channels and dispense with the sonic smoke and mirrors.

With the marketing of 6.1 and 7.1 surround, the industry has decisively outwitted itself. It has convinced many consumers to buy new receivers and more speakers. But it has also undermined the 5.1-channel standard, which is more appropriate for the home, slowing the acceptance of surround sound in general.

All right people, fess up. How many speakers are you using: five, six, or seven? And those of you who “upgraded” from 5.1, do you really feel your system has started sounding significantly better?

Mark Fleischmann is the audio editor of Home Theater and the author of Practical Home Theater (http://www.quietriverpress.com/).

Oedipus - Web application security analysis

Oedipus is an open source web application security analysis and testing suite written in Ruby. It is capable of parsing different types of log files off-line and identifying security vulnerabilities. Using the analyzed information, Oedipus can dynamically test web sites for application and web server vulnerabilities.

http://oedipus.rubyforge.org/

Thursday, February 23, 2006

New DHCP For Linux?

New DHCP For Linux?
By Sean Michael Kerner

A new DHCP client for Linux is set to take advantage of an expected new feature in a future Linux kernel.

The new DHCP client is being proposed by kernel developer Stefan Rompf and will (when completed) automatically recognize when a Linux user has disconnected from a particular DHCP server and look for a new connection.

But the effort is not without its detractors who feel that a new DHCP client is not necessary for Linux.

DHCP is a cornerstone of Internet connectivity, assigning dynamic IP addresses to user connections.

According to Rompf, current DHCP clients on Linux do not recognize temporary disconnections. Such disconnections are common for notebook users that travel between different networks or that roam different hotspots and WLANs.

Rompf argues that the disconnection is not necessarily a limitation of the current 2.6 Linux kernel, as the kernel itself will notify userspace of a disconnection/reconnection event.

However, a feature that is expected to debut in the 2.6.17 Linux kernel will make it even easier to deal with disconnection/reconnection events. The most current Linux kernel release is 2.6.15 with 2.6.16 currently at the release candidate 4 stage.

Rompf said the 2.6.17 kernel will allow userspace to influence connection event signaling, so that a DHCP client could be notified that a connection has terminated and the client should attempt to obtain a new IP address.

The problem, though, is that in order to take advantage of the new feature, you need software that will support it, and that's where Rompf's new DHCP client comes into play.

"The DHCP client is a userspace program to obtain IP configuration when connected to a local network," Rompf told internetnews.com. "It won't be part of the kernel, but I hope for distributions to pick it up.

"There are already DHCP client packages, but they were all missing one feature that is important for my personal work: They do not automatically renew the configuration when I connect to a different network."

Not everyone agrees with Rompf's assessment.

Jean Tourrilhes, HP's Linux Wireless Extension and the Wireless Tools project leader, is known in the Linux community for his wireless Linux efforts.

Tourrilhes noted that Wireless Extension has supported Wireless Events, which provide users with precise information about connection status, since the 2.4.20 kernel release.

A new DHCP may also come with its own particular shortcomings.

"The traditional DHCP client has a lot of scripting features and API features that are in use, and that will take time to duplicate in the new client if ever they chose to do it," Tourrilhes told internetnews.com. "Personally, I think that fixing the traditional client would have been a better project.

"But, Stefan has the right to have his own opinion and motivation, and this is always progress."

The ISC, the group that is the lead sponsor of ISC DHCP (a popular reference implementation of DHCP), also disagrees with the assessment that a new DHCP client is needed for Linux.

"We don't think it needs to be done again from scratch, and it is something we are interested in including in future releases of DHCP," ISC spokesperson Laura Hendriksen said. "The one change we would like to make as we move forward with this is changing from a polling mode to an event-driven mode."

So far, Rompf's effort is in the alpha stage and is in active development.

"I hope to have it in good shape when Linux kernel 2.6.17 is released, because this kernel will allow interaction between the DHCP client and an 802.1x supplicant, so that authentication runs first, and after the success of the IP setup," Rompf said.

"This will increase usability quite a bit."

A Word to the Wise on WiMax

A Word to the Wise on WiMax: "The approval of a mobile 802.16x standard could open the door to low-cost, wireless broadband -- but not for a few years. Investors might want to take the time to adjust expectations."



(Via Wired News.)

Friday, February 17, 2006

Network Filtering by Operating System

Network Filtering by Operating System

by Avleen Vig
02/16/2006
You manage a heterogeneous network and want to provide different Quality of Service agreements and network restrictions based on the client operating system. With pf and altq, you can now limit the amount of bandwidth available to users of different operating systems, or force outbound web traffic through a transparent filtering proxy. This article describes how to install pf, altq, and Squid on your FreeBSD router and web proxy to achieve these goals.


Mission Objective

In an ideal environment, there would be no need for bandwidth shaping, OS fingerprint-based filtering, or even Quality of Service (QoS). Several factors in the real world require a change of game plan. Bandwidth is not free, and many ISPs charge customers based on bandwidth usage. Worms, viruses, and compromised systems can all lead to higher bandwidth costs. In the wake of the W32.Slammer worm, which saturated the connections of infected networks, many companies saw their monthly connectivity bills skyrocket due to the worm's traffic.

Filtering your connections based on operating system can go partway to helping keep such situations from running away. While I will focus on filtering traffic from Windows systems, this process can equally apply to BSD, Linux, Mac OS, or a host of other operating systems listed in the pf.os file on your system. This may be especially useful to people running older versions of OSes that have not or cannot be patched but still require some network connectivity.

As an extension of transparent filtering, content filtering is also possible, with tools such as squidGuard allowing children and corporate desktops alike to browse in relative safety.

Tools of the Trade

During my research for this article, several people asked me why I chose to use BSD, pf, altq, and Squid for this task. Other tools come close to providing the required functionality, but none fills the requirements as readily as these. Linux and iptables can work with Squid to provide a transparent proxy but cannot filter connections by operating system. Though other proxy servers exist, Squid is one of the best available today.

It is important to note that OS fingerprinting works only on TCP SYN packets, which initiate TCP sessions, and not on currently established connections or UDP sessions. While this will not be a problem for most systems and network administrators, you may want to pay more attention to your UDP filtering rules.
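
To see exactly which fingerprints pf can match against (they come from the pf.os file mentioned above), pfctl can list them; a minimal example, assuming a stock pf installation:

# list the operating system fingerprints known to pf
pfctl -s osfp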

Installing pf and altq

pf and altq provide packet filtering and bandwidth shaping, respectively. Their relationship is not unlike that between IPFIREWALL and DUMMYNET; here, the same rules file configures both pf and altq.

While pf is universally usable, altq requires a supported network card. The good news is that most network cards in common use are supported. Look at the Supported Devices section of man 4 altq to find a list of supported network cards.

Once you've confirmed you have a supported device, add pf and altq to your kernel. You will need to recompile your kernel as described in the FreeBSD Handbook. First, add a few options to the end of your kernel configuration file:

device pf
options ALTQ
options ALTQ_CBQ
options ALTQ_RED
options ALTQ_RIO
options ALTQ_HFSC
options ALTQ_CDNR
options ALTQ_PRIQ
Note: If you are installing altq on a multiprocessor system, add options ALTQ_NOPCC to your configuration before you recompile your kernel.
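
For reference, the usual build-and-install sequence from the Handbook looks roughly like this (sketched with a hypothetical kernel configuration named ROUTER on i386; adjust the name and architecture directory to your own setup):

# create a custom kernel configuration and add the device/options lines above
cd /usr/src/sys/i386/conf
cp GENERIC ROUTER

# build and install the new kernel, then reboot into it
cd /usr/src
make buildkernel KERNCONF=ROUTER
make installkernel KERNCONF=ROUTER
shutdown -r now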

After you have recompiled your kernel and rebooted, test pf to make sure it installed correctly with the command pfctl -s rules. If you see the error pfctl: /dev/pf: No such file or directory, pf did not install correctly. If you see the error No ALTQ support in kernel ALTQ related functions disabled, pf is working but altq is not. In the latter case, you will still be able to force users through a transparent proxy, but you won't be able to limit bandwidth using altq.
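
Assuming pf came up cleanly, you can enable it and load a ruleset by hand with standard pfctl commands (and, on FreeBSD, set pf_enable="YES" in /etc/rc.conf so it starts at boot). A minimal example using the default rules path:

# enable pf and load the ruleset
pfctl -e
pfctl -f /etc/pf.conf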

Installing Squid with Transparent Filtering Support

Install Squid with the command:

% cd /usr/ports/www/squid && make config install clean
This will present you with a list of options for compiling Squid. To enable transparent proxy support, select SQUID_PF. You can also select or deselect any other option. I often find SQUID_SNMP useful for gathering and graphing statistics using RRDTool. Once Squid is installed, edit /usr/local/etc/squid/squid.conf. Set at least the options:

http_port YOUR_PROXY_IP:3128
http_access deny to_localhost
acl our_networks src YOUR_NETWORK/24
http_access allow our_networks
visible_hostname YOUR_HOSTNAME
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
Replace YOUR_PROXY_IP with the IP address your proxy server will listen on, YOUR_NETWORK/24 with your internal network address range (for example, 192.168.0.0/24), and YOUR_HOSTNAME with the hostname you want to show to users in error messages. YOUR_HOSTNAME is not required but extremely useful if you have a cluster of proxy servers sharing a common front end such as a load balancer.

While you can get by with changing only these options, you should spend some time going through the remainder of your squid.conf file and tuning it to your needs. Over time, you may need to tune various other options such as cache sizes or connection timeouts. The Squid configuration file is a behemoth; spending an hour now getting familiar with various options may save you time and trouble in the future.
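
The excerpt ends before the pf rules that tie the OS fingerprinting, altq, and Squid pieces together, so here is a rough illustration only (my own sketch, not the author's rules, using hypothetical interface names, queue names, bandwidth figures, and a proxy address) of what the redirect-and-queue portion of /etc/pf.conf might look like:

ext_if="fxp0"
int_if="fxp1"
proxy_ip="192.168.0.1"    # address Squid listens on (hypothetical)

# give TCP sessions initiated by Windows hosts their own, smaller queue
altq on $ext_if cbq bandwidth 10Mb queue { q_default, q_windows }
queue q_default bandwidth 80% cbq(default)
queue q_windows bandwidth 20%

# send web traffic from Windows hosts through the transparent Squid proxy
rdr on $int_if inet proto tcp from any os "Windows" to any port www -> $proxy_ip port 3128

# queue Windows-originated TCP sessions separately; pass everything else normally
pass in on $int_if inet proto tcp from any os "Windows" to any keep state queue q_windows
pass in on $int_if inet proto tcp from any to any keep state

Treat this purely as a starting point and check pf.conf(5) and altq(4) on your FreeBSD release for the exact syntax.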

Tuesday, February 14, 2006

Powerful Remote X Displays with FreeNX

Powerful Remote X Displays with FreeNX: "Imagine X server technology with compression so tight that GNOME and KDE sessions yield impressive response times when run over modems with SSH encryption. Don't pinch yourself; you're not dreaming! Tom Adelstein explains how FreeNX is the cure-all to many of X11's ills in this excerpt from Running Linux."



(Via Linux DevCenter.)

MSN TV Linux Cluster

MSN TV Linux Cluster: "I just saw this MSN TV Linux Cluster over on Engadget. The boxes have a 733MHz Celeron, 128MB RAM, 2 x USB, Ethernet, and a 64MB CF card for storage. That’s twice the RAM of an Xbox and with a node cost of $0.99 it makes a much more sensible and compact cluster. The only limit right now seems to be a 64MB capacity cap for the CF card.





You do need to build a level shifting serial cable to talk to it though. Microsoft included serial pins on the board, which is convenient. I think that a TTL to RS-232 level shifting box is becoming the second most useful device behind the bench power supply. You need to do serial level shifting whether you are talking to an NSLU, iPod, GP2X, or WRT54G. You might as well make the thing USB while you are at it. So, who wants to do the how-to?




"



(Via hack a day.)

DEF CON 14 Beta FAQ v0.95 Now Available!

DEF CON 14 Beta FAQ v0.95 Now Available!: "An update to the official FAQ talking about DEF CON and DEF CON 14. Questions and Answers about the new hotel location, costs, events, resources and more. The next update will include a split into two FAQs. One for general DEF CON questions, and one for DEF CON 14."



(Via DEF CON Announcements.)

Sunday, February 05, 2006

Economically complex cyberattacks

Economically complex cyberattacks: "Most people working in cyber security recognize that the interconnections and complexities of our economy can have a huge effect on the destructiveness of cyber attacks. They refer casually to 'network effects,' 'spillover effects' or 'knock-on effects.' Yet there is little understanding of how such effects actually work, what conditions are necessary to create them, or how to quantify their consequences. People working in cyber security also generally acknowledge that combinations of cyber attacks could be much more destructive than individual attacks. Yet there is little understanding of exactly why this is the case or what the principles would be for combining attacks to produce maximum destruction. These two sets of problems are actually the same. It is by taking account of the interconnections and complexities in our economy that cyber-attackers could devise combinations of attacks to cause greater destruction. To understand how this would work, we need to look at three features of our economy that are responsible for much of its structural complexity: redundancies, interdependencies, and near monopolies. Then, as we examine these features, we need to see how each of them would prompt a different sort of attack strategy."



(Via Security & Privacy Magazine, IEEE - new TOC.)

Network security basics

Network security basics: "Writing a basic article on network security is something like writing a brief introduction to flying a commercial airliner. Much must be omitted, and an optimistic goal is to enable the reader to appreciate the skills required. The first question to address is what we mean by 'network security.' Several possible fields of endeavor come to mind within this broad topic, and each is worthy of a lengthy article. To begin, virtually all the security policy issues apply to network as well as general computer security considerations. In fact, viewed from this perspective, network security is a subset of computer security. The art and science of cryptography and its role in providing confidentiality, integrity, and authentication represents another distinct focus even though it's an integral feature of network security policy. The topic also includes design and configuration issues for both network-perimeter and computer system security. The practical networking aspects of security include computer intrusion detection, traffic analysis, and network monitoring. This article focuses on these aspects because they principally entail a networking perspective."



(Via Security & Privacy Magazine, IEEE - new TOC.)

Announce: OpenSSH 4.3 released

Announce: OpenSSH 4.3 released: "OpenSSH 4.3 has just been released. It will be available from the mirrors listed at http://www.openssh.com/ shortly.

OpenSSH is a 100% complete SSH protocol version 1.3, 1.5 and 2.0 implementation and includes sftp client and server support.

We have also recently completed another Internet SSH usage scan, the results of which may be found at http://www.openssh.com/usage.html

Once again, we would like to thank the OpenSSH community for their continued support of the project, especially those who contributed code or patches, reported bugs, tested snapshots and purchased T-shirts or posters.

Read more..."



(Via OpenBSD Journal.)

LinuxForum 2006: Several OpenBSD speakers

LinuxForum 2006: Several OpenBSD speakers: "Thomas Alexander Frederiksen writes:

LinuxForum 2006 is the 9th annual Open Source conference in Copenhagen, Denmark. It is the largest IT-conference in the Nordic region, and it's very popular due to being a low budget, high quality event. It is a joint venture between three local user groups BSD-DK, DKUUG and SSLUG.

On March 4th, Henning Brauer and Felix Kronlage will be among the many speakers on the technical day of the conference.

Read more..."



(Via OpenBSD Journal.)

Tuesday, January 31, 2006

Shmoocon 2006: Wi-Fi Trickery or How to Secure, Break and Have Fun with Wi-Fi

Shmoocon 2006: Wi-Fi Trickery or How to Secure, Break and Have Fun with Wi-Fi: "Shmoocon 2006: Wi-Fi Trickery or How to Secure, Break and Have Fun with Wi-Fi

Franck Veysset and Laurent Butti, both from France Telecom R&D, presented several proof-of-concept tools at Shmoocon that use 802.11 raw injection. The first is Raw Fake AP. The original Fake AP is a script that generates thousands of fake access points. It is easy to spot because of tell-tale signs like the BSSID showing the AP has only been up for a couple milliseconds. Raw Fake AP tries to generate legitimate access points by modifying BSSIDs and sending beacon frames at coherent time intervals.

Raw Glue AP is designed to catch probe requests from clients scanning for a preferred ESSID. It then tries to generate the appropriate probe responses to keep the client occupied.

Raw Covert was the final tool. It creates a covert channel inside of valid ACK frames. ACK frames are usually considered harmless and ignored by wireless IDS. The tool is really basic right now: there is no encryption, and it doesn’t handle dropped frames."



(Via hack a day.)

LZMA challenge

LZMA challenge: "Today Steven committed the LZMA port and unlike bzip2 the algorithm doesn't seem to be patented. I have to admit that I was very impressed with the performance of the compressor after Todd showed me his results. The only downside seems to be the time it takes to compress something. Decompression speed is about the same as gzip or bzip2.


The issues at hand here are that this code is GPLd and is written in C++ for added obfuscation. What would make this algorithm very useful is if someone could write an actual free BSD licensed version with the exact same API as libz. In other words create a drop-in replacement for libz.


If you intend to take on this challenge, here are some guidelines:


* BSD licensed
* written per style(9)
* Written in C
* Should have a full regression test
* It should follow the OpenBSD coding and design practices

Happy coding!


Update: I should have done some more research. I was incorrect about bzip2 being patented. I searched the patent offices and only found references to it as being public domain. My bad, sorry for the confusion. Thanks to tedu for bringing this to my attention."



(Via OpenBSD Journal.)

Monday, January 30, 2006

Toshiba’s HD DVD Players

Toshiba’s HD DVD Players: "

In March Toshiba will bring their new HD-XA1 and HD-A1 HD DVD players to the American market. They will play HD DVD discs as well as DVD discs, upconverting the latter to 720p or 1080i (over HDMI) if you so wish. Using HD DVD discs, the player can play back native HD in 720p or 1080i, also over the HDMI output.


The new HD DVD players will output copy-protected HD content through the HDMI interface in the native format of the HD DVD disc content of either 720p or 1080i. Through the HDMI interface, standard definition DVDs can be upconverted to output resolution of 720p or 1080i to complement the performance of a HDTV. As the conversion takes place in the player, the signal remains free from excessive digital-to-analog conversion artifacts.


SACD and DVD-Audio seem to be getting nowhere, and now HD DVD offers another hi-res audio option, with the inclusion of DD+ and DTS-HD.


The lossless mandatory formats include Linear PCM and Dolby TrueHD (only 2-channel support is mandatory). The TrueHD format is bit-for-bit identical to the high resolution studio masters and can support up to eight discrete full range channels of 24-bit/96 kHz audio. Another lossless format (specified as an optional format) is DTS-HD. This employs high sampling rates of up to 192 kHz.


Both models feature built-in multi-channel decoders for Dolby Digital, Dolby Digital Plus, Dolby TrueHD (2 channel), DTS and DTS-HD. The HD-XA1 employs four high-performance DSP engines to decode the multi-channel streams of the wide array of audio formats. These high-performance processors perform the required conversion process, as well as the extensive on-board Multi-Channel Signal Management including: User Selectable Crossovers, Delay Management and Channel Level Management.


The new HD DVD players can pass digital information to a Surround Sound Processor/Receiver via S/PDIF or HDMI. For Dolby Digital and DTS, the bitstream will be passed through both connections just as in a standard DVD player with the same interfaces. Dolby Digital Plus and DTS-HD content will be converted to a standard bitstream format that is compatible with any processor equipped with decoders of the respective formats and output through S/PDIF and HDMI. Additionally, all the audio formats for either DVD or HD DVD will be decoded to PCM and output via HDMI in either stereo or multi-channel.


The HD-XA1 will retail for an MSRP of $799.99, while the HD-A1 will go for $499.99. Let the HD war begin!


HiddenWires - Toshiba Introduces Line-Up of First HD DVD Players for the U.S. Market


"



(Via HDBlog.net.)

Sunday, January 22, 2006

RPM Rollback in Fedora Core 4/5

RPM Rollback in Fedora Core 4/5: "

Fedora Core 4/5 uses yum for package management. yum is built on top of rpm, and pirut, pup, and yumex are graphical interfaces built on top of yum. Together, these tools provide a simple-to-use, powerful package management system.

One of the least-known secrets about rpm is that it can roll back (undo) package changes. It can take a fair bit of storage space to track the information necessary for rollback, but since storage is cheap, it's worthwhile enabling this feature on most systems.

Here are cut-to-the-chase directions on using this feature:

To configure yum to save rollback information, add the line tsflags=repackage to /etc/yum.conf.

To configure command-line rpm to do the same thing, add the line %_repackage_all_erasures 1 to /etc/rpm/macros.

Install, erase, and update packages to your heart's content, using pup, pirut, yumex, yum, rpm, and the yum automatic update service.

If/when you want to roll back to a previous state, perform an rpm update with the --rollback option followed by a date/time specification. Some examples: rpm -Uhv --rollback '9:00 am', rpm -Uhv --rollback '4 hours ago', rpm -Uhv --rollback 'december 25'.
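
Putting those pieces together (the same settings quoted above, plus an illustrative rollback time):

# /etc/yum.conf -- keep repackaged copies of anything yum changes
tsflags=repackage

# /etc/rpm/macros -- do the same for direct rpm invocations
%_repackage_all_erasures 1

# later, undo everything installed, erased, or updated since 9 am today
rpm -Uhv --rollback '9:00 am'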

(Via .)