From Kate Stecenko on Tue, 14 Apr 1998
I have a problem; can you help me?
Our network has two segments, each with many stations running Windows 95 and Windows NT. The segments are connected via a router: a Linux box with Mars NWE for IPX routing and the kernel's internal IP routing.
I need all computers on both segments to be visible to each other via NetBIOS (in Network Neighborhood/Microsoft Windows Network). Not all computers on our network have a TCP/IP stack (that's impossible for important reasons), so I cannot use NetBIOS over TCP/IP. Is there any way to make my Linux box and Samba work with NetBEUI, or to run NetBIOS over IPX?
Last I heard NetBEUI is not routable. Novell's IPX/SPX is routable to about 16 hops --- and a properly configured Netware system should automatically route IPX. I don't know about IPX routing through the Linux kernel (it might require some static tweaking).
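For reference, kernel IPX routing on a 2.0.x box is usually bound to the interfaces with the ipx_interface tool from the ipx-utils package. This is only a sketch: the network numbers and the frame type below are placeholders, not values from your site --- substitute whatever your Netware servers already use.

```shell
# Sketch only: bind IPX to both segments so the kernel can route between
# them.  The network numbers (0x100, 0x200) and the 802.2 frame type are
# assumptions -- use the numbers your Netware configuration already has.
modprobe ipx                            # skip if IPX is compiled in
ipx_interface add -p eth0 802.2 0x100   # -p marks the primary interface
ipx_interface add eth1 802.2 0x200
```

Once both interfaces carry distinct IPX network numbers, the kernel (or Mars NWE) can route between them.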
I don't know of any way to tunnel NetBIOS traffic over IP or IPX.
- I think you can configure Linux to do ethernet bridging (an experimental config option for this seems to have crept into the recent 2.0.x kernels). Bridging is a process whereby ethernet frames are copied from one interface (segment) to another. This differs from routing in that the router works at a higher level of the OSI reference model (routing happens at the network layer, bridging occurs at the data link layer, and normal ethernet hubs work at the physical layer).
One cost of this is that the bandwidth of one segment is usually no longer isolated from the other (meaning that your utilization may become unacceptably high). Some bridges are more "intelligent" than others --- they "learn" which ethernet cards are on which segment (by promiscuously watching the MAC --- media access control --- addresses on all ethernet frames on each interface).
The smart switches or bridges then selectively forward frames between the segments. (I use the term frames to refer to ethernet data structures or transmission units and "packets" to discuss those from the upper layers).
Some switching hubs (like the Kalpana) are quite expensive but perform all of this in hardware/firmware. The advantage is that traffic that's local to a segment won't be copied to the other --- which should reduce the overall bandwidth utilization of this approach.
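If you want to experiment with that bridging option, the general shape of the configuration looks like the sketch below. I'm showing the brctl tool from the later bridge-utils package; the userland tool for the experimental 2.0.x code differs, so treat every command and interface name here as an assumption to check against your kernel's documentation.

```shell
# Sketch: join the two segments into one logical ethernet.  Requires a
# kernel built with bridging support; interface names are examples.
brctl addbr br0           # create the bridge device
brctl addif br0 eth0      # attach the first segment
brctl addif br0 eth1      # attach the second segment
ifconfig eth0 0.0.0.0 up  # bridge ports carry no addresses of their own
ifconfig eth1 0.0.0.0 up
ifconfig br0 up           # start learning and forwarding frames
```

With the bridge up, NetBEUI and NetBIOS broadcasts cross between the segments because, at the data link layer, there is now only one segment.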
The disadvantages involve NetBIOS and Netware/IPX. NetBIOS is a "chatty" protocol involving lots of broadcasts, particularly by servers (which in '95, NT, and WfW means every machine with any "shares"). IPX is better, for the most part, but most of the servers and services used by Netware rely on SAPs (Service Advertising Protocol packets). These are broadcasts as well.
(SAPs are why you don't have to configure a Netware client system with information about default routers, name servers, and things like that. The client listens to the wire for some period of time and hears a list of these periodic SAPs. The disadvantage in large networks with lots of servers, print servers, and other services is that the SAPs can chew up a sizable portion of your bandwidth --- and routers propagate them.)
- Rather than trying to get this to work at or below the transport layer (NetBIOS, TCP/IP, IPX/SPX), you could try to get above it, into the session, presentation, or application layers. These approaches are generically called "gateways."
However, I don't know of any gateways that are appropriate to SMB servers.
- The rumors I've been hearing are that Microsoft will be phasing NetBEUI out in favor of TCP/IP. So your organization's constraint may not be feasible in the long run (the next year or two).
Please tell me what to do.
Question your management's constraint about TCP/IP. NT and '95 both include it (so it can't be a cost issue).
TCP/IP is the most widely used and deployed set of networking protocols in the world --- and has been around longer than anything else in current use. It is clearly scalable (despite the naysayers and doomsdayers --- "the Death of the Internet" is not imminent). It doesn't suffer from the limitations of IPX and NetBIOS.
I suspect that your management's proscription is based on ignorance. They probably think they know just enough about TCP/IP to worry about security, and not enough to know that protocol selection has little to do with a system's security. I've seen this discussed several times on the comp.unix.security newsgroup and the BugTraq mailing list.
If they are concerned about where to get IP addresses, it's simply a non-issue. They should read RFC 1918, which reserves several ranges of IP addresses for use by "disconnected" networks. In this case "disconnected" means "behind a firewall" or "not connected to the Internet" (your choice).
You can use any of these that you want --- you don't have to ask anyone's permission. It is your responsibility to prevent any such packets from being routed to the Internet (which is where all the discussion of "IP masquerading," "NAT: network address translation," and "application proxies" --- a form of "gateway" --- comes in).
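As a sketch, assuming nothing about your existing numbering, addressing the two segments out of one of the RFC 1918 blocks looks like this (all addresses are examples only):

```shell
# RFC 1918 reserves three blocks for private use:
#   10.0.0.0    - 10.255.255.255   (one class A network)
#   172.16.0.0  - 172.31.255.255   (16 class B networks)
#   192.168.0.0 - 192.168.255.255  (256 class C networks)
# Example: number each segment out of 192.168/16 on the Linux router.
ifconfig eth0 192.168.1.1 netmask 255.255.255.0
ifconfig eth1 192.168.2.1 netmask 255.255.255.0
```

Each client on a segment then uses its segment's router address (192.168.1.1 or 192.168.2.1 in this example) as its default gateway.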
If their concern is about preventing propagation of "forbidden" protocols (application layer) or "sensitive" information across their routers --- there are well-established ways of doing that (built right into the Linux kernel, among other places). It's much easier to prevent all propagation than it is to selectively allow access to specific protocols like HTTP (web) and SMTP (e-mail), and especially FTP (which is an ugly protocol for firewall designers to support --- but just as easy as any other to block).
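On a 2.0.x kernel the tool for this is ipfwadm. The sketch below defaults to blocking everything the box would forward and then punches a single hole; the addresses and the chosen service are invented for illustration:

```shell
# Sketch: deny all forwarding between the segments by default...
ipfwadm -F -p deny
# ...then selectively allow one service, e.g. SMTP (TCP port 25) from
# segment A to a single mail host on segment B (addresses are examples):
ipfwadm -F -a accept -P tcp -S 192.168.1.0/24 -D 192.168.2.10/32 25
```

The default-deny policy is the easy part; each protocol you then choose to permit is one explicit accept rule.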
So, I have to question their "important" reasons and suggest that if these reasons really are that important, and bridging is not feasible, then they probably have an unresolvable conflict in their requirements.
(They might consider running polling processes on the Linux Samba/NWE server to replicate/mirror all of the data that must be accessible between the segments. This would be a big win in a couple of ways --- if feasible given their usage patterns. It cuts down traffic across the router (speed/latency benefits for all) and ensures that an extra "backup" of all the relevant data is available. The obvious problems involve concurrency if you allow write access on both sides of the fence. However, if the data is of a type that can be "maintained" on one side and published across in a read-only fashion, it is worth a look.)
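A minimal sketch of such a polling process, using smbclient's tar mode and run from cron on the Linux box --- the server name, share name, and paths are all hypothetical:

```shell
# Sketch: pull a snapshot of a share from the far segment and re-export
# it read-only via Samba on the near side.  All names are placeholders.
smbclient //FARSERVER/DATA -N -Tc /var/spool/mirror/data.tar
tar -xf /var/spool/mirror/data.tar -C /export/mirror-of-data
# (Then publish /export/mirror-of-data with "read only = yes" in the
# corresponding smb.conf share stanza.)
```

Keeping the mirror read-only sidesteps the concurrency problem entirely: the data is "maintained" on one side and merely published on the other.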
In many ways I'd even question their requirement to share these as files. If you have a few well-managed servers it may be reasonable to make them all "dual-homed" (put two ethernet cards in every server and let them all straddle the segments). If they are requiring the propagation of shares created and maintained by desktop users, then they probably have a major management problem already.
TIA, Kate Stecenko.
I hope the explanation helps. Just off hand it sounds like you've been saddled with a poorly considered set of constraints and requirements.
It happens to a lot of sysadmins and netadmins. While it exercises our creativity and encourages us to socialize (in our mailing lists and newsgroups) --- it also leads to premature graying (or baldness, in my case).
Sorry there's no magic bullet for this one.