From Nilesh M. on Thu, 24 Dec 1998
I just have some questions about setting up linux to run as a server for my home computer and to share an internet connection and also to setup as a server for the internet.
O.K. That is three different roles:
- Network Server (which services)
- Internet Gateway (proxy and/or masquerading)
- Internet Host/Server (which services)
It is possible for Linux to concurrently handle all three roles --- though having all of your "eggs in one basket" may not be a good idea with regards to security and risk assessment.
Traditionally your concerns would also have encompassed capacity planning --- but a typical modern PC with a 200MHz+ Pentium processor, 32Mb to 256Mb of RAM and 4Gb to 12Gb of disk space has quite a bit of capacity compared to the Unix hosts of even 5 to 10 years ago.
Do you know if I can setup a linux box with one 10mbs ethernet for a modem and a 100mbs ethernet for a network in my house? Where do I start and how would I do it?
I presume you're referring to a 10Mbps ethernet for a cable modem, ISDN router (like the Trancell/WebRamp, or the Ascend Pipeline series), or a DSL router. These usually provide 10Mbps interfaces and act as routers to the cable, ISDN or xDSL services to which you're subscribed.
It's certainly possible for you to install two or three ethernet cards into a Linux system. Any decent modern 100Mbps ethernet card will also automatically handle 10Mbps if you plug them into such a LAN. So you'd just put two of these cards into your system, plug one into your router and the other into your highspeed hub.
You often have to add the following line to your /etc/lilo.conf to get the kernel to recognize the second ethernet card:
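A typical form of that line (this is a sketch --- `eth1` names the second card, and the zeros ask the kernel to probe):

```
append="ether=0,0,eth1"
```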
... the 0,0, is a hint to autoprobe for the IRQ and I/O base address for this driver. Alternatively you might have to specify the particulars for your cards with a line like:
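For example (the IRQ of 10 and I/O base of 0x300 here are purely illustrative values --- use whatever your card is actually jumpered or configured for):

```
append="ether=10,0x300,eth1"
```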
... instead. This line must be present in each of the Linux "stanzas" (groups of lines which refer to different Linux kernels with their corresponding root filesystem pointers and other settings).
Of course you must run the /sbin/lilo command to read any changes in your /etc/lilo.conf file and "compile" them into a new set of boot blocks and maps.
If you have a normal modem connected to the system --- it's possible to use that as well. You can use PPP (the pppd program) to establish an Internet connection over normal phone lines. There are also internal ISDN cards, T1 "FRADs" (frame relay access devices) and CSU/DSUs (or codecs --- coder/decoder units) that can be installed into your PC and controlled by Linux drivers.
I've seen references to the ipppd to control some sorts of internal ISDN cards. I think most of the others have drivers that make them 'look like' a modem or ethernet driver to Linux.
I just want to buy two 100mbs ethernet cards to hook up to each other... so I don't think I'd need a hub do I? I only want two computers hooked up to this makeshift network.
You either need a hub, or you need a "crossover" ethernet patch cord. A normal cat 5 ethernet patch cord isn't wired correctly to directly connect two ethernet cards.
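For reference, a 10/100BaseT crossover cable swaps the transmit and receive pairs (pin numbering per the usual TIA-568 scheme --- stated here from general wiring knowledge, so verify against your cards' documentation):

```
end A     end B
1 (TX+)   3 (RX+)
2 (TX-)   6 (RX-)
3 (RX+)   1 (TX+)
6 (RX-)   2 (TX-)
```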
Any help would be appreciated, especially something like a link to a document which would give me a step by step setup.
I don't have such a link. As you may have realized there are a couple of hundred HOWTO documents on Linux and many of them relate to configuring various services.
Let's go back to our list of different roles:
- Network Server (which services)
- Internet Gateway (proxy and/or masquerading)
- Internet Host/Server (which services)
Starting at the top. You have a small network that is not normally connected to the Internet (there isn't a permanent dedicated Internet connection). So, you probably want to use "private net" addresses for your own systems. These are IP addresses that are reserved --- they'll never be issued to any host on the Internet (so you won't create any localized routing ambiguities by using them on your systems).
There are three sets of these numbers:
192.168.*.*                      256 Class C nets
172.16.*.* through 172.31.*.*     16 Class B nets
10.*.*.*                           1 Class A net
... I use 192.168.42.* for my systems at home.
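As a quick sanity check, here's a small shell sketch (the `is_private` helper is my own illustration, not a standard tool) that classifies a dotted-quad address against those three reserved blocks:

```shell
# is_private: print "yes" if an address falls in a reserved "private net" block
is_private() {
  case "$1" in
    10.*)                                  echo yes ;;  # the 1 Class A net
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) echo yes ;;  # the 16 Class B nets
    192.168.*)                             echo yes ;;  # the 256 Class C nets
    *)                                     echo no  ;;
  esac
}

is_private 192.168.42.1   # yes -- safe to use on a home LAN
is_private 172.20.1.1     # yes
is_private 192.169.0.1    # no  -- this one is routable on the Internet
```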
... These addresses can also be used behind firewalls and Internet gateways. The classic difference between a router and a gateway is that a router just routes packets between networks (operating at the "network" layer of the ISO OSI reference model) while a gateway does translation between protocols (operating at the applications or other upper layers of the reference model).
In the case of Linux we can configure our one Linux system to act as local server and as an Internet gateway. Our gateway can operate through "proxying" (using SOCKS or other applications layer utilities to relay connections between our private network and the rest of the world), or through IP masquerading (using network address translation code built into the kernel to rewrite packets as they are forwarded --- sort of a network layer transparent proxying method).
However, we're getting ahead of ourselves.
First we need to set up our Linux LAN server. So we install Linux and configure its internal ethernet card with an IP address like 192.168.5.1. This should have a route that points to our internal network, something like:
route add -net 192.168.5.0 eth0
... to tell the kernel that all of the 192.168.5.* hosts will be on the eth0 segment.
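Put together, the interface setup might look like this (a sketch for a 2.0.x-era system; the address and netmask are the examples from above, and both commands need root):

```shell
# give the inside card its private-net address
/sbin/ifconfig eth0 192.168.5.1 netmask 255.255.255.0 up
# tell the kernel that all of 192.168.5.* is reachable on eth0
/sbin/route add -net 192.168.5.0 netmask 255.255.255.0 eth0
```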
Now, what services do you want to make accessible to your other systems?
By default a Linux installation makes a common set of services (telnet, NFS, FTP, rsh, rlogin, sendmail/SMTP, web, samba/SMB, POP and IMAP etc) available to any system which can reach you. Most of these are accessible via the "internet service dispatcher" called 'inetd'. The list of these services is in the /etc/inetd.conf file. Some other services, such as mail transport and relaying (sendmail), and web (Apache httpd) are started in "standalone" mode -- that is they are started by /etc/rc.d/*/S* scripts. NFS is a special service which involves several different daemons --- the portmapper and mountd in particular. That's because NFS is an "RPC" based service.
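For illustration, a typical /etc/inetd.conf entry routes a service through tcpd (TCP Wrappers); commenting a line out and sending inetd a SIGHUP disables that service. The exact fields vary between distributions, so treat these two lines as a sketch:

```
#telnet  stream  tcp  nowait  root  /usr/sbin/tcpd  in.telnetd
ftp      stream  tcp  nowait  root  /usr/sbin/tcpd  in.ftpd -l -a
```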
The fact that any system that can route packets to you can request any service your system offers, and the fact that most Unix and Linux systems offer a full suite of services "right out of the box" has classically been a major security problem. Any bug in any service's daemon could result in a full system compromise which could be exploited from anywhere in the world. This is what led to the creation of TCP Wrappers (which is installed in all major Linux distributions by default --- but is configured to be completely permissive by default). It is also why we have "firewalls" and "packet filters."
It's tempting to think that you'll be too obscure for anyone to break into. However, these days there are many crackers and 'script kiddies' who spend an inordinate amount of time "portscanning" --- looking for systems that are vulnerable --- taking them over and using them for further portscanning, sniffing, password cracking, spamming, warez distribution and other activities.
I recently had a DSL line installed. So, I'm now connected to the Internet full time. I've had it in for less than a month and there are no DNS records that point to my IP addresses yet. I've already had at least three scans for a common set of IMAP bugs and one for a 'mountd' bug. So, I can guarantee you that you aren't too obscure to worry about.
You are also at risk when you use dial-up PPP over ISDN or POTS (plain old telephone service) lines. The probabilities are still reasonably on your side when you do this. However, it's worth configuring your system to prevent these problems.
So, you'll want to edit two files as follows:
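A minimal pair of entries in that spirit (a sketch consistent with the description that follows) would be:

```
# /etc/hosts.allow
ALL: LOCAL

# /etc/hosts.deny
ALL: ALL
```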
... that's the absolute minimum you should consider. This configuration means that the tcpd program (TCP Wrappers) will allow access to "local" systems (those with no "dots" in their host names, relative to your domain), and will deny access to all services by all other parties.
For this to work properly you'll have to make sure that all of your local hosts are given proper entries in your /etc/hosts file and/or that you've properly set up your own DNS servers with forward and reverse zones. You'll also want to make sure that your /etc/host.conf (libc5) and/or /etc/nsswitch.conf (glibc2, aka libc6) are configured to give precedence to your hosts files.
My host.conf file looks like:
# /etc/host.conf
order hosts bind
multi on
and my /etc/nsswitch.conf looks like:
passwd:     db files nis
shadow:     db files nis
group:      db files nis
hosts:      files dns
networks:   files dns
services:   db files
protocols:  db files
rpc:        db files
ethers:     db files
netmasks:   files
netgroup:   files
bootparams: files
automount:  files
aliases:    files
glibc2 has hooks to allow extensible lookup for each of these features through modular service libraries. Thus we'll soon be seeing options to put 'LDAP' in this services switch file --- so that hosts, user and group info, etc could be served by an nss_ldap module which would talk to some LDAP server. We could see some user and group information served by "Hesiod" records (over DNS or secure DNS protocols) using some sort of nss_hesiod module. We might even see NDS (Novell/Netware directory services) served via an nss_nds module.
But I'm straying from the point.
Once you've done this, you should be able to provide normal services to your LAN. Precisely how you set up your client systems depends on what OS they run and which services you want to access.
For example, if you want to share files over NFS with your Linux or other Unix clients, you'd edit the /etc/exports file on your Linux server to specify which directory trees should be accessible to which client systems.
Here's an exports file from one of my systems:
# /           *.starshine.org(ro,insecure,no_root_squash)
# /           192.168.5.*(ro,insecure,no_root_squash)
/etc/         (noaccess)
/root/        (noaccess)
/mnt/cdrom    192.168.5.*(insecure,ro,no_root_squash)
... note I've marked two directories as "noaccess" which I use when I'm exporting my root directory to my LAN. I do this to prevent any system in the rest of my network from being able to read my configuration and passwd/shadow files. I only export my root directory in read-only mode, and I only do that occasionally and temporarily (which is why those are commented out at the moment). My CDROM I leave available since I'm just not worried about anyone in the house reading data off of any CDs I have around.
Keep in mind that NFS stands for "no flippin' security" --- anyone in control of any system on your network can pose as any non-root user and access any NFS share "as" that user (so far as all filesystem security permissions are concerned). NFS was designed for a time when sites only had a few host systems, all of which were connected and tightly controlled in locked rooms. It was never intended for use in modern environments where people can carry a Linux, FreeBSD, or even Solaris x86 system into your office under one arm (installed on a laptop) and connect it to the nearest ethernet jack (now scattered throughout every corner of modern offices --- I've seen them in the reception areas of some sites).
To do filesharing for your Windows boxes you'd configure Samba by editing /etc/smb.conf. To act as a fileserver for your MacOS systems you'd install and configure 'netatalk'. To emulate a Netware fileserver you'd install Mars_nwe, and/or buy a copy of the Netware Server for Linux from Caldera (http://www.caldera.com).
There are ways to configure your system as a printer server for any of these constituencies as well.
Beyond file and print services we move to the "commodity internet services" like FTP, telnet, and HTTP (WWW). There's generally no special configuration necessary for these (if you've installed any of the general purpose Linux distributions).
If you create an "ftp" account in your /etc/passwd file then anonymous FTP will be allowed to access a limited subdirectory of files. If you rename this account to "noftp" or "ftpx" or anything other than "ftp", and/or if you remove the account entirely, then your system will not allow anonymous FTP at all. If you allow anonymous FTP you can simply put any files that you want made public into the ~ftp/pub directory --- and make sure that they are readable. By default the FTP services are run through tcpd so they will respect your hosts.allow/hosts.deny settings.
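As a sketch of that publishing step (the directory and file names here are examples; on a real system FTPHOME would be the "ftp" account's home directory, e.g. /home/ftp):

```shell
# stage a file into an anonymous-FTP style pub directory and make it
# world-readable; FTPHOME defaults to a scratch directory for illustration
FTPHOME=${FTPHOME:-$(mktemp -d)}
install -d -m 0755 "$FTPHOME/pub"
echo "some public data" > "$FTPHOME/pub/README"
chmod 0644 "$FTPHOME/pub/README"     # anonymous clients need world-read
ls -l "$FTPHOME/pub/README"
```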
If you're going to set up a "real" FTP site for public mirroring or professional "extranet" applications you'd want to use ncftpd, proftpd, or beroftpd instead of the now aging WU-ftpd or the old BSD FTP daemon (in.ftpd). These alternative FTP daemons have their own configuration files and can support virtual hosting and other features. In some of them you can create "virtual users" --- accounts that are only valid for FTP access to specific FTP subtrees and/or virtually hosted services --- accounts that cannot be used to access any other service on the system.
Web services are controlled with their own configuration files. There are a couple of whole books just on the configuration of Apache servers. By default they let anyone view any web pages that you put into the 'magic' directories (/home/httpd/docs or something like that).
It's possible to limit access to specific directories according to the IP addresses (or reverse DNS names) of the clients. As with TCP Wrappers this should not be considered a form of "authentication" --- but it can be used to distinguish between "local" and "non-local" systems IF YOU HAVE ANTI-SPOOFING PACKET FILTERS in place (a part of any good firewall).
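With Apache 1.x that kind of address-based restriction is done with the access-control directives in a per-directory block; a hedged sketch (the path and the address prefix are examples, not a recommendation):

```
<Directory /home/httpd/docs/private>
    order deny,allow
    deny from all
    allow from 192.168.5.
</Directory>
```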
telnet, rlogin, rsh, and other forms of interactive shell access are generally pretty easy to set up. Like many Unix/Linux services it is harder to disable or limit access to these services than it is to allow it.
Under Red Hat Linux access to these and other "authenticating" services can be controlled by editing PAM configuration files under /etc/pam.d/
So, the short answer to the question "How do I set up Linux as a server?" is: you install it, set up its address and routing, then you install and configure the services that you want to provide.
Now, when we want to use Linux as a gateway to the Internet (or any other network --- to connect your home network to your office or to a friend's network) you first resolve the addressing and routing issues (set up your second interface and add the appropriate routes). Then you use IP masquerading or proxy services (SOCKS) to allow your systems (using the non-routable "private net" addresses) to access services on the Internet.
To use IP masquerading with the old ipfwadm code (as present in the standard 2.0.x kernels) you just issue a command like:
ipfwadm -F -a accept -m -D 0.0.0.0/0 -S 192.168.5.0/24
... which adds (-a) a rule to the forwarding (-F) table to "accept" for "masquerading" (-m) any packets that are "destined for" (-D) anywhere (0.0.0.0/0) and are from source IP addresses (-S) that match the pattern 192.168.5.0/24 (an address mask that specifies the first 24 bits, or three octets as the "network portion" of the address --- and therefore covers that whole class C network).
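In other words, a /24 mask treats the first three octets as the network part. A quick shell illustration of that arithmetic (my own sketch --- `cut` is just peeling off octets):

```shell
# a /24 mask marks the first 24 bits (three octets) as the network part,
# so any 192.168.5.* host falls inside the network 192.168.5.0/24
ip=192.168.5.77
network="$(echo "$ip" | cut -d. -f1-3).0"
echo "$network"    # 192.168.5.0 -- the whole class C that the rule matches
```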
You should definitely use a modular kernel and almost certainly should have 'kerneld' loaded when you use this masquerading technique. That's because there are several common protocols (especially FTP) which require special handling for masquerading (in the case of FTP there's a data connection that comes back from the server to the client, while the control connection goes in the usual direction, from the client to the server).
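On a 2.0.x system the pieces fit together roughly like this (a sketch: the module name is as shipped with the stock 2.0 masquerading code, and all of these commands require root):

```shell
# enable forwarding, load the FTP masquerading helper, then masquerade
echo 1 > /proc/sys/net/ipv4/ip_forward
/sbin/modprobe ip_masq_ftp    # handles FTP's extra data connection
/sbin/ipfwadm -F -a accept -m -D 0.0.0.0/0 -S 192.168.5.0/24
```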
For this reason I actually prefer applications proxying. To use that you go to the "contrib" directory at any Red Hat site and download the SOCKS server and client packages. You install the server on your Linux gateway then you install the clients on any of your Linux clients.
On the SOCKS gateway you create a file: /etc/socks5.conf with something like this for its contents:
route 192.168.5. - eth0 permit - - - - - -
... there are many options that you can use to limit access to the socks gateway --- but this is the simplest working example.
On the Linux clients you create a file named /etc/libsocks5.conf with an entry in it that looks something like:
socks5 - - - - 192.168.5.2 noproxy - 192.168.5. -
... where the ".2" address is the one on which I was running this SOCKS server.
For the non Linux clients you have various different configuration methods. Most Windows TCP/IP utility suites (other than Microsoft's) support SOCKS proxies. There are replacement WINSOCK.DLL's that support this proxying protocol transparently for most/all other Windows services. The MacOS applications also seem to support SOCKS pretty widely.
There are a few alternatives to NEC's SOCKS servers. I've found "DeleGate" to be a pretty good one (search for it on Freshmeat). DeleGate has the advantage that you can use it as a "manually traversed" proxy as well as a "SOCKS" compatible one. The SOCKS proxying protocol allows the client software to communicate with the proxy server, relaying information about the request to it so that it can, in turn, relay the request to the external servers. This is called "traversal."
Non-SOCKS proxies have to have some other traversal mechanism. Many of them are "manually traversed" --- I telnet or ftp to the TIS FWTK proxies (for example) and I log in as "email@example.com." --- in other words I encode additional account and destination information into the prompts where I'd normally just put my account name.
DeleGate allows you to use this manual traversal mechanism when you are stuck with a non-SOCKSified client.
I've also seen reference to another SOCKS server package called "Dante" --- that's also listed at Freshmeat (http://www.freshmeat.net).
There are also a few other types of proxies for special services. For example the Apache web server, and the CERN web server and a few others can be used as "caching web proxies." Squid can proxy and cache for web and FTP.
Some services, such as mail and DNS are inherently "proxy" capable by design. I can't adequately cover DNS or e-mail services in this message. There are full-sized books on each of these.
So that's the very basics of using Linux as a gateway between a private LAN and the Internet. If you get a set of "real" IP addresses, and you insist on using these to allow "DRIP" (directly routed IP) into your LAN you don't have to do any of this IP masquerading or proxying --- but you should do some packet filtering to protect your client systems and servers.
Good packet filtering is difficult. I alluded to one of the problems when I pointed out that FTP involves two different connections --- an outgoing control connection and an incoming data connection. There's also a "PASV" or "passive" mode which can help with that --- but it still involves two connections. This wreaks havoc with simple packet filtering plans since we can't just blindly deny "incoming" connection requests (based on the states of the "SYN" and "ACK" flags in the TCP packet headers). One of the "advantages" (or complications) of "stateful inspection" is that it tracks these constituent connections (and the TCP sequencing of all connections) to ensure consistency.
A decent set of packet filters will involve much more code than the set of proxying and masquerading examples I've shown here. I personally don't like DRIP configurations. I think they represent too much risk for typical home and small business networks. However, here's a sample:
# Flush the packet filtering tables
/root/bin/flushfw
# Set default policy to deny
/sbin/ipfwadm -I -p deny
/sbin/ipfwadm -F -p deny
/sbin/ipfwadm -O -p deny
# Some anti-martian rules -- and log them
## eth1 is outside interface
/sbin/ipfwadm -I -o -W eth1 -a deny -S 192.168.0.0/16
/sbin/ipfwadm -I -o -W eth1 -a deny -S 172.16.0.0/12
/sbin/ipfwadm -I -o -W eth1 -a deny -S 10.0.0.0/8
/sbin/ipfwadm -I -o -W eth1 -a deny -S 127.0.0.0/8
# Some anti-leakage rules -- with logging
## eth1 is outside interface
/sbin/ipfwadm -O -o -W eth1 -a deny -S 192.168.0.0/16
/sbin/ipfwadm -O -o -W eth1 -a deny -S 172.16.0.0/12
/sbin/ipfwadm -O -o -W eth1 -a deny -S 10.0.0.0/8
/sbin/ipfwadm -O -o -W eth1 -a deny -S 127.0.0.0/8
## these are taken from RFC1918 --- plus
## the 127.* which is reserved for loopback interfaces
# An anti-spoofing rule -- with logging
/sbin/ipfwadm -I -o -W eth1 -a deny -S 220.127.116.11/28
# No talking to our fw machine directly
## (all packets are destined for forwarding to elsewhere)
/sbin/ipfwadm -I -o -a deny -D 18.104.22.168/32
/sbin/ipfwadm -I -o -a deny -D 22.214.171.124/32
# Anti-broadcast rules
## (block broadcasts)
/sbin/ipfwadm -F -o -a deny -D 126.96.36.199/32
/sbin/ipfwadm -F -o -a deny -D 188.8.131.52/32
# Allow DNS
## only from the servers listed in my caching server's
## /etc/resolv.conf
/sbin/ipfwadm -F -a acc -D 184.108.40.206/32 -P udp -S 220.127.116.11/32
/sbin/ipfwadm -F -a acc -D 18.104.22.168/32 -P udp -S 22.214.171.124/32
/sbin/ipfwadm -F -a acc -D 126.96.36.199/32 -P udp -S 188.8.131.52/32
# Anti-reserved-ports rules
## block incoming access to all services
/sbin/ipfwadm -F -o -a deny -D 184.108.40.206/28 1:1026 -P tcp
/sbin/ipfwadm -F -o -a deny -D 220.127.116.11/28 1:1026 -P udp
# Diode
## (block incoming SYN/-ACK connection requests)
## breaks FTP
/sbin/ipfwadm -F -o -a deny -D 18.104.22.168/28 -y
## /sbin/ipfwadm -F -o -i acc \
##     -S 0.0.0.0/0 20 -D 22.214.171.124/28 1026:65535 -y
## simplistic FTP allow  grr!
# Allow client side access:
## (allow packets that are part of existing connections)
/sbin/ipfwadm -F -o -a acc -D 126.96.36.199/28 -k
There are bugs in that filter set. Reading the comments you'll see where I know of a rule that handles most FTP --- but puts at risk any services that run on ports above 1024 --- like X windows (6000+) etc. This would simply require the attacker to have control of their system (be root on their own Linux or other Unix system --- not too tough) and to create packets that appear to come from their TCP port 20 (the ftp data port). That's also trivial for anyone with a copy of 'spak' (send packet).
So, I have this rule commented out and I don't show a set of rules to allow localhost systems to connect to a proxy FTP system.
Note that these addresses are bogus. They don't point to anything that I know of.
The only parts of this set of filters that I feel confident about are the parts where I deny access for incoming spoofed packets (the ones that claim to be from my own addresses or from non-routable or "martian" addresses like localhost). I also have rules to prevent my system from "leaking" any stray private net and/or martian packets out into the Internet. This is a courtesy --- and it has the practical benefit that I'm much less likely to "leak" any confidential data that I'm sharing between "private net" system on my LAN --- even if I screw up my routing tables and try to send them out.
I've read a bit about ipfilter (Darren Reed's IP Filtering package --- which is the de facto standard on FreeBSD and other BSD systems and which can be compiled and run on Linux). It seems to offer some "stateful" features that might allow one to more safely allow non-passive FTP. However, I don't know the details.
The 2.2 kernels will include revamped kernel packet filtering which will be controlled by the 'ipchains' command. This is also available as a set of unofficial patches to the 2.0 series of kernels. It doesn't seem to offer any "stateful inspection" features but it does have a number of enhancements over the existing ipfwadm-controlled tables.
Your last question was about configuring Linux as an Internet server (presumably for public web pages, FTP or other common Internet services).
As you might have gathered by now, that is the same as providing these services to your own LAN. Under Linux (and other forms of Unix) any service defaults to world-wide availability (which is why we have firewalls).
I've spent some time describing how Linux and other Unix systems need to be specially configured in order to limit access to services to specific networks. Otherwise someone in Brazil can print a document on your printer as easily as you can.
To be an Internet server all you have to do is have a static IP address (or regularly update your address record at http://www.ml.org). Once people know how to route requests to your server --- assuming you haven't taken steps to block those requests --- Linux will serve them.
Most of the challenges in setting up networks relate to addressing, routing, naming and security. Most of us still use "static" routing for our own networks --- just manually assigning IP addresses when we first deploy our new systems. Most of us with dial-in PPP get dynamic IP addresses from our ISP's. Some sites now use DHCP to provide dynamic addresses to desktop systems (servers still need consistent addresses --- and using DHCP for those just introduces additional opportunities for failure).
For routing, subnetting, and LAN segmentation issues --- read my posting on routing from last month (I think Heather is publishing it this month). That's about 30 pages long!
(The one thing I glossed over in that was "proxyarp" on ethernet. It's covered in another message this month so glance at it if you'd like to learn more.)
I hope I've imparted some sense of the importance of considering your system's security. Even if you have nothing of value on your systems --- if the thought of some cracker vandalizing your files for kicks is of no concern to you --- it is irresponsible to connect a poorly secured system to the Internet (since your compromised system may be used to harass other networks).
I would like to write a FAQ about this after I'm done... hopefully I can help others after a bit of experimenting myself.
While the offer is appreciated --- it would be more of a book than an FAQ. However, I would like to see some "Case Studies" --- descriptions of typical SOHO (small office, home office), departmental, and enterprise Linux (and heterogenous) installations.
These would include network maps, "sanitized" examples of the addresses, routing tables and configuration files for all services that are deployed in the network, on all of the clients and servers present. Company, domain and other names, and IP addresses would be "anonymized" to discourage any abuse and minimize any risk represented by exposure. (E-mail addresses of the contributors could be "blind" aliased through my domain or hotmail, or whatever).
The important thing here is to define the precise mixture of services that you intend to provide and the list of users and groups to which you intend to provide them. This is a process that I've harped on before -- requirements analysis.
You need to know who you are serving and what services they need.