I feel like a bash scripting Dr. Frankenstein. Last week, I wrote about the world’s dumbest instant messaging tool. This week I’ve moved on to the world’s dumbest TCP server. In my quest for a deep understanding of networking, it’s a bit of a detour, a roadside America attraction. I’m not so much learning new stuff here as figuring out how to explain what I know. And having fun pushing the limits of bash in the process.
Part of this is just the challenge of the absurd: Could I make this particular bear dance? But the serious point is two-fold: to make the idea of network services more accessible, and to show that bash is more powerful than you might think. People don’t think of bash as a programming language: A bash script usually starts life as a bunch of commands you executed one at a time on the command line, then copied into a file so you didn’t have to re-type them. That’s not real programming, is it? But bash also has variables, data structures, functions, and even process forking. That’s enough to get a fair amount done.
Network servers have the opposite problem. They’re big bits of infrastructure that Other People write. And infrastructure-grade servers – like the Apache web server – are complicated. Apache has to implement the full HTTP protocol, not just the small subset of it that most people use. It’s got all this logic for authenticating users, negotiating content types, redirecting to pages that have moved, and so on. On top of all that, it’s got developer-decades worth of edge case handling, performance optimizations, and feature creep.
But the core of what it does is straightforward: It listens on a socket, you connect to it and send it a message in a particular format, it does some processing on that, and it sends you a message in response. That’s fundamentally what all network servers do. The ones we’re familiar with – web, mail, and chat servers – have rich and complex message protocols, but a quick skim through the /etc/services file turns up sedimentary layers of oddly specific services, like ntp and biff. They do small, specific, useful things.
So what’s the simplest server I can write that does something even minimally useful? And can I write it in bash?
I figured netcat is a good place to start. Last week, we used it to send messages back and forth between two people. All we want to do now is replace one of those people with a very small shell script.
netcat -l port starts up a server that listens on a port and dumps anything it gets to standard output (stdout). It also sends anything it gets on standard input (stdin) back to the client. We just need to redirect netcat’s stdout to a program, and then redirect that program’s output to netcat’s stdin. Doing either of those alone would be trivial; doing both is tricky.

Figuring that out took a fair amount of digging through the bash man page, and experimenting to get the syntax right, but in the end it was a trivial amount of code. Let’s take it as read for now that we’ve got a program called wtf_server, which reads from stdin and writes to stdout. What we’re going to do is use bash’s built-in coproc command, which will start it up as a background process, and set up new file handles for its stdin and stdout.
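(The original snippet is lost; given the description, it’s a one-liner along these lines, with wtf_server being the bash function defined in a moment:)

    # Run wtf_server as a coprocess. Bash creates an array named WTF:
    # ${WTF[0]} reads from wtf_server's stdout, ${WTF[1]} writes to its stdin.
    coproc WTF { wtf_server; }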
The WTF tells coproc to create an array named WTF and save the file handles in it. ${WTF[0]} will be wtf_server’s stdout, ${WTF[1]} will be its stdin. So now we can start up the netcat server with its stdout and stdin jumper-cabled to wtf_server as desired.
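(Also a reconstruction; the plumbing described above looks roughly like this, with the port number as a placeholder and netcat flags varying by variant:)

    # netcat reads what it sends to clients from wtf_server's stdout,
    # and everything clients send is written to wtf_server's stdin.
    nc -l 1234 <&${WTF[0]} >&${WTF[1]}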
Really, that’s the hard bit. Now we just need a program to read stdin and write stdout. In fact, we don’t even need a real program; our wtf_server is actually just a bash function. In its simplest incarnation, it just echoes back what was sent to it:
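(A sketch of that simplest incarnation, since the original snippet is lost:)

    wtf_server () {
        while read -r line; do
            echo "$line"
        done
    }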
With the coproc and netcat server running, you can switch to another terminal, open a client connection with netcat, and have an exchange like this:
Ok, so that’s the proof of concept. We’re definitely falling short of the “minimally useful” criteria, but we can replace our echo with any bash commands we want. The only constraint is what it’s safe to do – this is still a toy service, anonymous and going over an unencrypted connection. Don’t run the input as shell commands, fer chrissakes. Within those limits, there’s still plenty of useful things we can do: reporting on system info or serving up static content. Here’s a sketch with a few ideas:
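(The sketch itself didn’t survive; a reconstruction along the lines described below, with made-up command names and a ./docs directory, might be:)

    wtf_server () {
        local docs=./docs
        local cmd arg
        while read -r cmd arg; do
            case "$cmd" in
                time)   date ;;
                uptime) uptime ;;
                list)   ls "$docs" ;;
                get)    cat "$docs/$(basename "$arg")" 2>/dev/null ||
                            echo "no such document: $arg" ;;
                *)      echo "commands: time, uptime, list, get <doc>" ;;
            esac
            printf '> '    # every response ends with a prompt
        done
    }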
This gives us a limited interactive shell. Each case statement handles a different request format. We can get the machine’s current time and uptime stats. It also has a docs directory; we can list the files in it and cat them out individually. A session looks like this:
(After connecting, I just hit return to send a blank line, and the server responded with the help text and the ‘>’ prompt. Every server response ends with a prompt.)
That’s it. No real protocol, certainly nothing formal like HTTP, just a set of ad-hoc request handlers, made up as we went along. The beauty of this is that it doesn’t depend on anything else. It’s not running behind Apache or anything. There’s no development environment to set up, no gems to install; just one standard unix utility – netcat – and bash handles all the rest.
Here’s the full wtf_server.sh script that starts this up.
Unpacking Packets
April 27th, 2013

So, I sort of understand this whole TCP thing: You open a connection, you send packets, you close the connection. TCP provides a reliable delivery protocol layered on top of the unreliable IP protocol. So your data gets wrapped in a TCP segment, which gets wrapped in an IP datagram.
But what does that actually look like?
Web requests, email, and all of that add another layer of protocol overhead on top of TCP, so let’s start out with something really simple: the world’s dumbest instant messaging service. We’re going to use netcat, the Swiss army knife of TCP/IP utilities. All we’re going to do is have one netcat process (the server) listen on a TCP port, and have another netcat process (the client) open a connection to it. Both will send any messages typed on the command line, and print any messages they get. We start up the server like so:
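(The listing is lost; from the description that follows, it’s netcat listening on port 43981, something like this depending on your netcat variant:)

    nc -l 43981
    # or, on traditional netcat variants:  nc -l -p 43981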
That’s just telling netcat to start up and listen on port 43981. Why 43981? We’ll get to that in a bit.
Then we switch to another terminal, and start up the client like so:
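(Again reconstructed; connecting to the listener on the same machine would be something like:)

    nc localhost 43981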
Here, we need to tell it which server to connect to, and give it the same port number. Then we type stuff into the client:
Each time we hit return, the line shows up in the server:
A key thing about TCP is that it’s a two-way connection. Part of what the client does when it opens the connection is tell the server how to send messages back to it. So here we can also type something into the server:
And it will show up in the client:
When we get bored, we ctrl-c to quit either the server or the client, and the other shuts down automatically.

Pop the Hood

Ok, so that’s it. Messages going across a TCP connection a line at a time. Totally bare-bones. So what’s going on under the hood? To answer that, we’re going to re-run this little exercise, and this time we’re going to use tcpdump to listen in on the conversation. As the name implies, tcpdump listens in on TCP traffic and dumps it out to the screen. So, open a third terminal and fire up tcpdump:
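(The exact command is missing; given the flags described below, it was something like this, and you’ll probably need root:)

    sudo tcpdump -i lo -X port 43981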
“-i lo” tells it to listen on the loopback interface, since our machine is just sending messages to itself, and “-X” will dump out the TCP segments in a couple of useful formats. “port 43981” tells it to only report traffic to and from our netcat server port.
We don’t see anything when we start up our netcat server, but as soon as we start up the client, we get this in the tcpdump terminal:
What we see here is the client and server negotiating a TCP connection in what’s known as a three-way handshake. Our client sends a packet saying that it wants to start a connection, the server sends back an acknowledgement, and the client responds with a confirmation. For each of these we get a summary line describing the packet and then a dump of the actual contents – verbatim, byte-by-byte. The “0x0000” and such on the left are the byte index in hexadecimal for the start of each row; so zero, 10 (16 in decimal), 20 (32), 30 (48). The big chunk in the center is the data in hex characters. Each hex character is 4 bits (half a byte, and thus referred to as a “nibble” – ah, nerd humor), so each set of 4 is two bytes. The block on the right is the same data, rendered as ASCII characters (with all the non-printing characters shown as periods). Since what we’re dealing with here is all binary data, that’s not useful yet.
So what is all this crap?
IP Header

Well, like I said, we’ve got TCP segments wrapped in IP datagrams, so the IP data is going to be what we see first. As always, Wikipedia is an awesome resource, and its page on IPv4 lays out the datagram structure for us bit-by-bit. (You may want to open that up in another tab for reference while you’re reading this.) Let’s look at our dump of the first packet:

The first thing we see is the “4” telling us that this is an IPv4 datagram (not IPv6). Then a “5” for the header length. That’s in 32-bit words, so each of those will be two blocks of hex characters. So we already know that the IP header part of this packet is just:
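(The dump itself didn’t survive; reassembled from the field values quoted in the next few paragraphs, those 20 bytes read:)

    4500 003c d2b2 4000 4006 6a07 7f00 0001
    7f00 0001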
The next byte is the DSCP and ECN fields. They’re all zeros, so we can ignore them here, but essentially they tell routers how important or urgent this packet is. In principle, all packets are the same, but in practice we might want some packets – like for Voice Over IP – to have a higher priority if there’s a lot of traffic. This one byte opens a rabbit hole of technical and policy issues.
The next two bytes – 003c or 60 in decimal – tell us the total length of the packet. Sure enough, the packet ends after 12 bytes of the “0x0030” row. Two bytes here means that the total length of the packet can’t be more than 65535 bytes (2^16 – 1).
IP doesn’t guarantee that packets will arrive in the order they were sent, so each packet needs an Identification field so the server can put the data back together the right way. The starting index is arbitrary – d2b2 for this one – but you’ll see that it’s incremented normally for later packets. Why not just start with 0001 or 0000? I suspect there’s another rabbit hole there.
Each packet incurs a certain amount of overhead in transmission and processing, so it makes sense to put as much data in each packet as possible. But while the IPv4 protocol sets a maximum of 65535 bytes, it doesn’t require that every router support that. Remember that the protocol was developed back when 64K bytes was more than most machines had, and even now, that’s a lot for one message on a router that’s handling large volumes of traffic.
So our next two bytes – 4000 – deal with packet fragmentation. When a router has to forward a packet that’s bigger than the next router can deal with, it will break it into fragments. The two bytes are divided up a little oddly: 3 bits for flags and 13 bits for the Fragment Offset – its index number. That means that the flags are the 8, 4, and 2 bits of the first nibble. It’s 4, so that’s the Don’t Fragment bit. Even if we were fragmented, this is the first packet, so the Fragment Offset is zero.
Next is TTL – Time To Live. When I bring up this page in a browser on my laptop, it sends packets skipping across the network to my hosting provider in California. They’ll pass through a few routers at my ISP and several more at internet backbone providers across the country before they get to the hosting server. There isn’t a pre-ordained route that they’ll follow. Each router looks at each packet and tries to figure out where to send it to get it closer to its destination. This is what makes the internet robust: If one of those connections goes down, the router will figure out the next best way to get the message through. (And yes, the mechanics of how that works are another whole essay.)
The downside of this is that if one or more routers are mis-configured, they could send the packets back to a previous router, and they’d end up going in loops. To keep packets from circling endlessly, they have a limited lifespan, measured in “hops”. Each router along the way decrements the TTL field. If the packet hasn’t got where it’s going by the time it gets to zero, the router knows something’s gone wrong, and drops it. Our packet starts off with a TTL byte of 40 hex, so it’s got 64 hops to live.
It may not look like it, but we’re almost down to the end of the IP header here. The next byte tells us the IP Protocol number for the contents of this IP datagram. 06 means it’s TCP.
The next two bytes – 6a07 – are the header checksum. It’s a number calculated from all the bytes in the header. It’s a way to check that the header wasn’t garbled in transit. When a router gets a packet, it calculates a checksum based on the header it received; if any bits got randomly flipped, the checksums won’t match. (This doesn’t protect against intentional tampering because someone could also update the checksum.)
The last two fields are the source and destination IP addresses. Again, this is a two-way connection we’re setting up here, so the client needs to tell the server where to send packets back to. Since we’re just talking to ourselves over the loopback interface, they’re both 127.0.0.1 – 7f00 0001 in hex.
TCP Header

Ok, that’s the IP header. Now on to the TCP header. Let’s strip the IP header out of our packet and see what’s left.

The first two fields – two bytes each – are the source and destination port numbers: e7dc and abcd. That’s why I picked the weird port to run this on: 43981 in hex is abcd, so it’s easy to spot in the output. e7dc is 59356, which isn’t significant – it’s just what was automatically chosen when the client opened the connection. Perhaps the most significant thing about the ports is that they’re not part of the IP header. Ports are a TCP-level concept; the IP layer only cares about getting the packets to the right machine.
The next four bytes – 22f4 9ff4 – are the Sequence Number (586,457,076). As with the Fragment Offset in the IP layer, this is to keep track of what order the segments belong in and which have been received. The big difference is that here it’s the index of the starting byte in the segment, so it will increase from segment to segment by the number of bytes in the TCP data. It also starts at an arbitrary value, and loops around to zero when it hits the maximum Sequence Number (4 Gigabytes). More on this later.
The next number is the Acknowledgement Number. It’s essentially the Sequence Number for the data received. It’s zero for now, so we’ll talk about it later when it’s got something to say for itself.
The next nibble is the Data Offset, which is the TCP header length in 32-bit words. It’s “a” (10), for a total of 40 bytes, which matches what we can see.
The rest of the a002 block are unused bits (reserved for future use) and flags. They’re all zero except the 2 bit, which is the SYN flag (for synchronize), which means that this segment is the start of a connection.
The next two bytes – 8018 (32792 in decimal) – are the Window Size. This is the sender putting a cap on how much data can be sent back to it, in case it has limited resources. I don’t know the reason for that exact number, but there’s a surprise here: We’ll see in a minute that there’s an optional field that multiplies this value.
Next is the TCP checksum. Unlike the IP checksum, this one is summed across both the TCP header and data. Why doesn’t the IP checksum just do both? I’m not sure, but I’d guess there’s both a design principle and a practical reason. TCP shouldn’t really depend on IP for that. Even though they were designed to work together, they have separate responsibilities. In theory, you could run TCP on top of other protocols than IP, though I don’t know of anyone doing that. So if TCP has to calculate its own checksum, there’s no point making IP do it as well. The practical concern is that the TCP data can be huge compared to the 40 bytes of IP header, and the IP checksum has to be checked at every hop; the TCP checksum is only checked when it reaches its destination.
The last standard field is the Urgent Pointer. The URG flag wasn’t set, so this is 0000. As to when that flag is set and how the urgent pointer is used when it is, that’s probably yet another rabbit hole.
Beyond that, we have a number of optional fields. They’re odd in that they’re not in a specific order, they’re different sizes, and they may have multiple sub-fields. The first byte of each tells us what type of field it is. I’ll run through them quickly, putting pipes between the sub-fields so you can see how they’re broken up.
- 02|04|400c: Maximum segment size = 400c (16,396 bytes). This is used by the TCP layer to limit the segment size and save it from getting fragmented at the IP layer.
- 04|02: Selective acknowledgement permitted. Allows the receiver to request re-transmission of only missing segments, rather than the whole message. More on this later.
- 08|0a|01fd b1be|0000 0000: Timestamp=01fd b1be (33403326), previous timestamp=0000 0000. Used to help determine the order of the TCP segments when the amount of data being sent is more than the maximum Sequence Number.
- 01: no operation – padding to align options on word boundaries for performance.
- 03|03|07: window scale = 7; multiplies Window Size by 2^7, bringing it to 4,197,376 bytes
Now that we know what we’re looking at, it’s pretty easy to read: time; host.port for source and destination; SYN flag; Sequence Number; window size; options with max segment, selective ack, timestamp, no-op, and window scale; and data length of zero.
Awesome, done! That’s the first packet.
The Rest of the Handshake

Now that we have the structure down, we just need to look at what’s different about the rest of the packets, and we can get most of what we need from the summary lines.

So, packet two is the response from our server.
What’s different? The IP identification is 0000, which strikes me as odd. Don’t know what’s going on there. And the checksum is different because of that. If we were connecting two different machines, we’d have seen the source and destination addresses switch. In the TCP header, the ports switched, which is the tip-off that this packet is going from the server to the client. We have a new Sequence Number, since the client and server keep separate counts of the data bytes they send. We now have an Acknowledgement Number, which is the client’s Sequence Number from the last packet, plus one. Both the SYN and ACK flags are set, marking this as the server acknowledgement. There’s a slightly different Window Size, but with the same scaling factor. The previous timestamp is set. And of course a different TCP checksum because of all that.
The third packet is the client’s confirmation of the connection. The server knows that the client asked for a connection, and the server knows that it sent an acknowledgement, but it needs to know that the client got the acknowledgment.
The IP Identification has been incremented, which changes the checksum. The TCP Sequence Number has been incremented and matches the Acknowledgement Number from the previous server packet. Likewise, the Acknowledgement Number is now the Sequence Number from the server packet, incremented. We have fewer options – just the timestamps and a couple of no-ops – so the Header Size is only 8. Only the ACK flag is set, which says that the connection is solid now. The Window Size is 257, with no scaling factor in the options. I don’t think this is actually used now that the connection is established, so I don’t know why it’s not zero. Something else to research.
Anyway, hey, TCP connection established! So this is the first thing we’d see whether we’re sending email, hitting a web page, or whatever.
Getting Down to Work

After that, we send a message from the client to the server, and get a response back. This time, we’re actually sending data!

Here’s where the ASCII output finally becomes useful. You can spot the “hello world!” content right away, which makes it a lot easier to keep track of which packets are which as we’re digging through this.
In the first packet, the Total Length is bigger by 13 (“hello world!” plus the return character). The client set the PSH flag, to indicate that there’s data to push to the application (netcat). The Acknowledgement and Sequence numbers are the same as last time because no data was sent; but then in the server’s response, its Acknowledgement Number is 13 (no coincidence) more than the client’s Sequence Number.
Ok, so now that the Acknowledgement number is starting to move for real, let’s talk about what all the futzing around with it and the Sequence Number is about. This is really the core of TCP, what makes it special. This is how it guarantees that the data gets through even when IP delivery fails and packets get dropped. To do that, the client needs to keep track of each chunk of data it sends out, and it needs to get a response from the server saying that piece has been received. It’s like registered mail but better, because the response tells the client not only that the server got a packet, but how much data it got and where it is in the client’s data set. (The Acknowledgement Number is actually the number of the next byte the server expects to get from the client).
TCP isn’t normally a one-for-one exchange like this. Often, the client would send out a whole mess of packets at once. Rather than acknowledging each individually, which would generate a whole lot of traffic, the server just sends back the Acknowledgement Number for the highest packet received, assuming it gets them all. If it doesn’t, if there are packets missing, it could send an acknowledgement for the highest contiguous packet it gets, and have the client re-send everything later. But it could be smarter than that, and this is where the Selective Acknowledgement option comes in. That lets the server acknowledge several discontinuous blocks (as start and end bytes), so the client only has to re-send the missing pieces.
Also remember that this is a two-way conversation. When the server is sending an acknowledgement to the client, it’s also sending its own Sequence Number, so the client can keep track of what it’s received from the server. In this case, all the content is in the outbound message, and what’s coming back is an empty acknowledgement. But in an HTTP request, we’d see content in the outbound message – request headers, the type of request (GET, POST, etc.), the path to the web page we’re requesting, and any form data – and the response would have the HTML content of the web page.
Shutting Down

The one thing left to show you is what happens when we close the connection. If you hit ctrl-c in either terminal, you’ll see both the client and server exit immediately, but you’ll also see a bunch of traffic in tcpdump.

We’ve sent a couple more messages back and forth (“how’s it going?”, “pretty good!”), so the Sequence and Acknowledgement numbers have jumped ahead a bit, as have the timestamps.
The real action here is in the TCP flags, the 2nd byte in the 0x0020 row. The ACK bit is still set, but now the FIN bit is too. That’s the client telling the server to close the connection. The server sends back a response with the FIN bit set, and the client sends a simple acknowledgement. It’s the same send-acknowledge-confirm exchange that we saw in the opening handshake.
Wrapping Up, Moving On

Ok, so that’s been a lot to absorb, but what I hope you’ve gotten out of this is that all these internet protocol details are interestingly complex, but totally comprehensible. You’ve got the tools to look under the hood, and with a bit of patience you can figure out what all the parts are doing.

If you haven’t actually run through this little netcat/tcpdump exercise on your own terminal, give it a try. You don’t need to pick through it byte-by-byte like I have. Just take a couple minutes to watch the packets go back and forth, and skim the summary lines. That gave me a sort of visceral sense of what’s going on.
If you do want to dig into this more, try pointing tcpdump at a real service. Set up a minimal web page on your local web server, point tcpdump at port 80, and hit the page with your browser. I just tried that myself, and I think it’s going to keep me busy for a while.
New Line of Inquiry
April 21st, 2013

I’m starting on a new line of inquiry here.
I’ve been doing web applications and unix systems programming for about 15 years now. Computer and network security has always been an aspect of what I do, but it’s never been my focus. I’m thinking it’s time to change that. I’m looking for a challenge, something that will keep me busy and learning for the rest of my life. Security is an unending struggle on an ever-changing field. I want to be doing something useful, and security is becoming more and more of an issue in daily life.
I’ve always been a generalist, and security seems to be a field where that’s valuable. It’s not about using one tool to do a specific job; it’s about understanding systems at multiple levels, how things interact and how they fail. It’s about how people interact with technology. It’s creative: there’s a lot of design that goes into making software both secure and usable.
I have a lot to learn; like I said, this has never been my focus. I need to understand unix systems and networking protocols at a much deeper level than I have before. I’ve said for years that you don’t learn anything from a working system. It’s when something fails that you have to go in under the hood and learn how it actually works. A corollary to that is that you need to really understand a system to figure out how it can break, or how it can be broken intentionally.
“Under the hood” means that I need to dust off my C programming chops and set aside the layers of abstraction that I’m used to. There’s also a lot of lore and literature specific to computer security that I need to absorb. There are tools for both attack and defense that I need to play with.
The best way to learn something is to try to explain it, so that’s what I’m going to do here. Let me know if it makes sense, if it’s useful. Let me know where there are gaps or unanswered questions, or where I’m just plain wrong.
Amateur Erlang
February 18th, 2013

This is based on the talk I gave at ErlangDC.
I don’t actually make my living programming Erlang, so I’m still a beginner in a lot of ways. I’ve been tinkering with it for the last year and a half or so, and in short, it’s been awesome. I’ve had a lot of fun; I’ve learned a ton, and what I’ve learned has been more broadly useful than I might have expected; and overall it’s definitely made me a better programmer.
So I’m going to talk about that experience: what you learn when you learn Erlang; some of the “ah-ha!” moments I’ve had – things that will give you a running start at the Erlang learning curve; and how to get some practical experience with Erlang before you dive into writing distributed, high-availability systems.
Foreign Travel

Learning a new programming language is like going to a foreign country. It’s not just the language, it’s the culture that goes with it. They do things differently over there. If you just drop in for a day trip, it’s going to be all weird and awkward; but if you stick around a bit, you start getting used to it; and when you go home, you find that there are things you miss and things that you’ve brought home with you.

There’s also a sort of meta-learning, because then when you go to a different country, it’s not as jarring; you adapt more quickly. I found that once I’d gotten used to Erlang’s syntax, other languages – Coffeescript and Scala – didn’t look so weird. At work the other day, someone was doing a demo of iPhone development, and some of my co-workers were really thrown by Objective-C’s syntax. I was just like, “Oh yeah, now that you mention it, it does have an odd mix of Lisp-style bracket grouping and C-style dot notation. Whatever. It’s code.”
Working with Erlang also teaches you a fundamentally different way of solving problems, especially if, like me, you’re coming from an object-oriented (OO) background like Java or Python. It has functional language features like recursion and closures. It focuses on simple data structures, and gives you powerful tools for working with them. And it’s all about concurrency. Those all add up to something more than the sum of their parts. They’re also things that translate to other languages: You’ll see Erlang-style concurrency in Scala, and functional programming is showing up all over the place these days.
Bowling

A good example of this is the bowling game program. I’ve written about this before, so let me just recap it quickly. It’s a standard programming challenge: Calculate the score for a game in bowling. It’s fairly straightforward, but there are a bunch of tricky edge cases. The first time I did it was in Python as a pair programming exercise, and at the end I was pretty happy with the results. It came out to 53 lines of code. Then about a year later, we did the same thing at one of the Erlang meetups, and the solution that one of the experienced Erlang programmers turned in was about ten lines of code. Ten lines of clean, elegant code, not like a Perl one-liner. That blew my mind.

I went back and looked at the Python code and realized how much of it was OO modeling that doesn’t actually help solve the problem. In fact, it creates a bunch of its own problems. Obviously, you need a Game class and a Frame class, and the Game keeps a list of Frames. Then you very quickly get into all these metaphysical questions around whether a Frame should just be a dumb data holder, or whether it should be a fully self-actualized and empowered being, capable of accessing other frames to calculate its score and detect bad data. Putting all the smarts in the Game may be the easiest thing, but that brings up historical echoes of failed Soviet central planning, and just doesn’t feel very OO. And once you’ve got these classes, you start speculating about possible features: What if you want to be able to query the Game for a list of all the rolls – does that change how you store that info? In short, you can get really wrapped around the axle with all these design issues.
The Erlang solution sidesteps that whole mess. It just maps input to output. The input is a list of numbers, the output is a single number. That sounds like some kind of fold function. With pattern matching, you write that as one function with four clauses: End of game, strike frame, spare frame, normal frame.
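(That code isn’t reproduced here; a sketch in that shape, with my own names rather than the meetup solution, might be:)

    %% Fold a flat list of rolls into a total score, one frame at a time.
    score(Rolls) -> score(Rolls, 1, 0).

    %% End of game: ten frames have been scored; any remaining rolls are bonuses.
    score(_BonusRolls, 11, Total) -> Total;
    %% Strike frame: 10 plus the next two rolls.
    score([10 | Rest], Frame, Total) ->
        [A, B | _] = Rest,
        score(Rest, Frame + 1, Total + 10 + A + B);
    %% Spare frame: 10 plus the next roll.
    score([A, B | Rest], Frame, Total) when A + B =:= 10 ->
        [C | _] = Rest,
        score(Rest, Frame + 1, Total + 10 + C);
    %% Normal frame: just the two rolls.
    score([A, B | Rest], Frame, Total) ->
        score(Rest, Frame + 1, Total + A + B).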
Bringing it Home

The thing is, once I’d seen the solution in Erlang, I was able to go back and implement it in Python; it came out to roughly the same number of lines of code, and was about as readable. That transfers, that way of solving problems. Instead of thinking, “What are the classes I need to model this problem domain?” start with, “What are my inputs and outputs? What’s the end result I want, and what am I starting from? Can I do that with simple data structures?”

So now when I write Python code, I use list comprehensions a lot more; for loops feel kinda sketchy – clumsy and error-prone. Modifying a variable instead of creating a new one sets off this tiny warning bell. I use annotations and lambda functions more often, and wish I had tail recursion and pattern matching. I do more with lists and dictionaries; defining classes for everything feels like boilerplate.
In the last year, I’ve also done a bunch of rich browser client Javascript programming with jQuery and Backbone.js. That’s a very functional style of programming. It’s all widget callbacks and event handling – lots of closures. (I don’t know who originally said it, but Javascript has been described as “Lisp with C syntax”.) Actually, I was coding in Coffeescript and debugging in Javascript. Coffeescript is essentially a very concise and strongly functional macro language for generating Javascript. So it was a really good thing to have the experience with Erlang going into that.
Community

The other thing about foreign travel is the people you meet. I’d like to make a little plug for the Erlang community. It’s still small enough to be awesome. Just lurk on the erlang-questions mailing list, and you can learn a ton. There are some really sharp people on it, and the discussions are a fascinating mix of academic and practical. You see threads that wander from theoretical computer science to implementation details to performance issues.

Ah-ha! Moments

Like I said, Erlang has a different way of doing things. It’s not that it’s all that more complicated than other languages, but it’s definitely different. So I’m going to talk about some of the ah-ha! moments – the conceptual breakthroughs – that made learning it easier.

Syntax

I’ll start with the syntax, which is probably the least important difference, but it’s the first thing that people tend to get hung up on. They look at Erlang code, and they’re all like, “Where are the semicolons? What are all these commas doing here? Where are the curly braces?” It all seems bizarre and arbitrary. It’s not. It’s just not like C.

What helped me get used to Erlang’s syntax was realizing that what it looks like is English. Erlang functions are like sentences: You have commas between lists of things, semicolons between clauses, and a period at the end. Header statements like -module and -define all express a complete thought, so they end with a period. A function definition is one big multi-line sentence. Each line within it ends with a comma, function clauses end with a semicolon, and there’s a period at the end. case and if blocks are like mini function definitions: They separate their conditions with semicolons and end with end. end *is* the punctuation; you don’t put another semicolon before it. After end, you put whatever punctuation would normally go there.
You also have to realize that all the things you think of as control structures – case, if, receive, etc. – are functions.
Here’s a cheat sheet:
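(The cheat sheet itself is missing; here’s a small example of my own that shows the punctuation in one place:)

    -module(cheatsheet).
    -export([describe/1]).

    %% Header statements each express a complete thought: they end with a period.
    %% Within a clause, expressions are separated by commas; clauses of the same
    %% function are separated by semicolons; the definition ends with a period.
    describe(N) when N > 0 ->
        Sign = positive,
        {Sign, N};
    describe(N) ->
        case N of
            0 -> zero;          % semicolons between case clauses,
            _ -> negative       % but no extra one before 'end'
        end.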
Even with that, it’s still pretty idiosyncratic. You’ll find yourself making a bunch of syntax mistakes at first, and that’ll be frustrating. Let me just say that you’ll get used to it faster than you expect. After a couple weekends hacking on Erlang code, it’ll start to look normal.
Recursion

Recursion is not something you use much in OO languages because (a) you rarely need to, and (b) it’s scary – you have to be careful about how you modify your data structures. Recursive methods tend to have big warning comments, and nobody dares touch them. And this is self-reinforcing: Since it’s not used much, it remains this scary, poorly-understood concept.

In Erlang, recursion takes the place of all of the looping constructs and iterators that you would use in an OO language. Because it’s used for everything, there are well-established patterns for writing recursive functions. Since you use it all the time, you get used to it, and it stops being scary.
This is also where Erlang’s weirdnesses start working together. Immutable variables actually simplify recursion, because they force you to be clear about how you’re changing your data at each step of the recursion. Pattern matching and guard expressions make recursion more powerful and expressive, because they let you break out the stages of a recursion in a very declarative way. Let’s look at the basics of recursion with a very simple example: munging a list of data.
Like a story, a recursive function has a beginning, a middle, and an end. The beginning and end are usually the easiest parts, so let’s tackle those first. The beginning of a recursion is just a function that takes the input, sets up any initial state, output accumulators, etc., and recurses. In this case, we take an input list and set up an empty output list.
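(The original snippets are lost; a minimal sketch of the beginning stage, with munge_list and munge as hypothetical names:)

    %% Beginning: take the input, set up an empty output accumulator, and recurse.
    munge_list(Input) ->
        munge_list(Input, []).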
The end stage is also easy to define. When the input is an empty list, just return the output list.
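(In the same sketch, the end stage is one clause of munge_list/2:)

    %% End: the input is empty, so return the output list.
    munge_list([], Output) ->
        Output;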
The middle stage defines what we do with any single element in the list, and how we move on to the next one. Here, we just pop the first element off the input list, munge it to create a new element, push that onto the output list, and recurse with the newly-diminished input and newly-extended output. (And note that we add our new element at the beginning of the list, rather than the end – it’s an efficiency thing.)
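(And the middle stage:)

    %% Middle: munge the first element, push it onto the front of the output,
    %% and recurse on the rest of the input.
    munge_list([First | Rest], Output) ->
        munge_list(Rest, [munge(First) | Output]).

(As written, the output comes out in reverse order; if that matters, the end clause would typically return lists:reverse(Output) instead.)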
That’s all there is to the basics of recursion. You may have multiple inputs and outputs, and there could be multiple middle and end functions to handle different cases (and we’ll see a more interesting example in a minute), but the basic pattern is the same.
As a coda to this, it’s worth mentioning that this is essentially what the higher-order functions in Erlang’s lists module, like lists:map and lists:foldl, are doing for you under the hood.
The original concept of object oriented programming was that objects would be autonomous actors rather than just data structures. They would interact with each other by sending messages back and forth. You see artifacts of this in things like Ruby’s send method.
In a sense, Erlang is more truly object oriented than OO languages, but you come to it by a roundabout way. Since even complex data structures are immutable, updating your data always creates a new reference to it. If you pass any data structure to a function, as soon as it modifies it, it’s dealing with a different data structure. So the only way to have something like global, mutable data is to have that reference owned by a single process and managed like so:
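(The code is missing here; a bare-bones sketch of the idea, with made-up function and message names:)

    %% One process owns the counter, and is the only thing that ever touches it.
    id_loop(NextId) ->
        receive
            {get_id, From} ->
                From ! {id, NextId},     % reply with the current id
                id_loop(NextId + 1)      % recurse with the updated state
        end.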
(You wouldn’t literally have code like this, but it’s conceptually what you’re doing.)
You could start it up and get new ids like so:
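(Again a reconstruction, assuming the id_loop sketch above is compiled into the calling module:)

    Pid = spawn(fun() -> id_loop(1) end),
    Pid ! {get_id, self()},
    receive {id, Id} -> Id end.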
So anything that would be an object in an OO language is a process in Erlang. I hadn’t realized quite how true that was until I was messing around in the Erlang shell, and opened a file.
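(The shell session is gone; it would have looked something like this, with the filename and the exact pid obviously varying:)

    1> {ok, F} = file:open("notes.txt", [read]).
    {ok,<0.64.0>}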
Hey, wait! That’s a process id. See?
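(Something like:)

    2> is_pid(F).
    true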
So when you open a file, you don’t actually access it directly; you’re spawning off a process to manage access to it.
As with recursion and the
That’s pretty cool. You have the ease of scripting, with full access to Erlang’s libraries. Furthermore, you can set a node name or sname in your script, and then it can connect to other Erlang nodes. (The special
For example, here’s a simple way to grab a web page:
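(The script itself is missing; here’s a sketch using escript and the built-in inets/httpc client, though the original may well have done it differently:)

    #!/usr/bin/env escript

    %% Usage: ./fetch.erl http://example.com/
    main([Url]) ->
        inets:start(),    % httpc lives in the inets application
        ssl:start(),      % only needed for https URLs
        {ok, {_Status, _Headers, Body}} = httpc:request(Url),
        io:format("~s~n", [Body]).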
That’s actually pretty handy because you can fetch data from web services that way. I started with this and built out a really simple automated testing tool for a web service I was writing, in about 20 lines of code. You can do all sorts of useful little things like this. They’re a good way to get used to Erlang’s idioms, and you can gradually build in more complexity as you go.
You could also mock out back-end web services for testing. I was doing some browser-side Javascript development last summer, and didn’t have access to the server I’d be talking to. (It was running on an embedded device.) So I faked it up in Erlang with Spooky, which is a simple Sinatra-style framework. It went something like this:
ChicagoBoss is a richer, Django-like framework. It has an ORM, URL dispatching, and page templates (with Django syntax, no less). Wait, _Object_-Relational Mapper? What’s that doing in a functional language? Yeah, ok, really they’re proplists with a parameterized module and a bunch of auto-generated helper functions wrapped around them. They’re still immutable; don’t freak out. More experienced developers may argue about whether that’s the right way to do things, but it certainly makes ChicagoBoss more beginner-friendly. It also gives you some enticing extras like a built-in message queue and email server. The ChicagoBoss tutorial is really concise and well-written, so I’ll leave it at that.
If you want to get into the nuts and bolts of proper HTTP request handling, take a look at WebMachine. Most web frameworks leave out or gloss over a lot of the richness of the HTTP protocol. WebMachine not only gives you a lot of control over every step of the request handling, but actually forces you to think through it. It’s not the most intuitive for beginners, but it’s an education.
Those are the ones I’ve played with a bit, but there are lots more.
Even if you’re unlucky, and the code works perfectly, almost every piece of software out there could benefit from better documentation. Take advantage of your newbie status; write a tutorial. The people who wrote the software know it inside and out; it helps to have beginners writing for beginners. I can vouch that a great way to learn something is to try to explain it to someone else.
I don’t actually make my living programming Erlang, so I’m still a beginner in a lot of ways. I’ve been tinkering with it for the last year and a half or so, and in short, it’s been awesome. I’ve had a lot of fun; I’ve learned a ton, and what I’ve learned has been more broadly useful than I might have expected; and overall it’s definitely made me a better programmer.
So I’m going to talk about that experience: what you learn when you learn Erlang; some of the “ah-ha!” moments I’ve had – things that will give you a running start at the Erlang learning curve; and how to get some practical experience with Erlang before you dive into writing distributed, high-availability systems.
Foreign Travel
Learning a new programming language is like going to a foreign country. It’s not just the language, it’s the culture that goes with it. They do things differently over there. If you just drop in for a day trip, it’s going to be all weird and awkward; but if you stick around a bit, you start getting used to it; and when you go home, you find that there are things you miss and things that you’ve brought home with you.There’s also a sort of meta-learning, because then when you go to a different country, it’s not as jarring; you adapt more quickly. I found that once I’d gotten used to Erlang’s syntax, other languages – Coffeescript and Scala – didn’t look so weird. At work the other day, someone was doing a demo of iPhone development, and some of my co-workers were really thrown by Objective-C’s syntax. I was just like, “Oh yeah, now that you mention it, it does have an odd mix of Lisp-style bracket grouping and C-style dot notation. Whatever. It’s code.”
Working with Erlang also teaches you a fundamentally different way of solving problems, especially if, like me, you’re coming from an object-oriented (OO) background like Java or Python. It has functional language features like recursion and closures. It focuses on simple data structures, and gives you powerful tools for working with them. And it’s all about concurrency. Those all add up to something more than the sum of their parts. They’re also things that translate to other languages: You’ll see Erlang-style concurrency in Scala, and functional programming is showing up all over the place these days.
Bowling
A good example of this is the bowling game program. I’ve written about this before, so let me just recap it quickly. It’s a standard programming challenge: Calculate the score for a game in bowling. It’s fairly straightforward, but there are a bunch of tricky edge cases. The first time I did it was in Python as a pair programming exercise, and at the end I was pretty happy with the results. It came out to 53 lines of code. Then about a year later, we did the same thing at one of the Erlang meetups, and the solution that one of the experienced Erlang programmers turned in was about ten lines of code. Ten lines of clean, elegant code, not like a Perl one-liner. That blew my mind.I went back and looked at the Python code and realized how much of it was OO modeling that doesn’t actually help solve the problem. In fact, it creates a bunch of its own problems. Obviously, you need a Game class and a Frame class, and the Game keeps a list of Frames. Then you very quickly get into all these metaphysical questions around whether a Frame should just be a dumb data holder, or whether it should be a fully self-actualized and empowered being, capable of accessing other frames to calculate its score and detect bad data. Putting all the smarts in the Game may be the easiest thing, but that brings up historical echoes of failed Soviet central planning, and just doesn’t feel very OO. And once you’ve got these classes, you start speculating about possible features: What if you want to be able to query the Game for a list of all the rolls – does that change how you store that info? In short, you can get really wrapped around the axle with all these design issues.
The Erlang solution sidesteps that whole mess. It just maps input to output. The input is a list of numbers, the output is a single number. That sounds like some kind of fold function. With pattern matching, you write that as one function with four clauses: End of game, strike frame, spare frame, normal frame.
Bringing it Home
The thing is, once I’d seen the solution in Erlang, I was able to go back and implement it in Python; it came out to roughly the same number of lines of code, and was about as readable. That transfers, that way of solving problems. Instead of thinking, “What are the classes I need to model this problem domain?” start with, “What are my inputs and outputs? What’s the end result I want, and what am I starting from? Can I do that with simple data structures?”So now when I write Python code, I use list comprehensions a lot more;
for
loops feel kinda sketchy – clumsy and error-prone. Modifying a variable instead of creating a new one sets off this tiny warning bell. I use annotations and lambda functions more often, and wish I had tail recursion and pattern matching. I do more with lists and dictionaries; defining classes for everything feels like boilerplate.In the last year, I’ve also done a bunch of rich browser client Javascript programming with jQuery and Backbone.js. That’s a very functional style of programming. It’s all widget callbacks and event handling – lots of closures. (I don’t know who originally said it, but Javascript has been described as “Lisp with C syntax”.) Actually, I was coding in Coffeescript and debugging in Javascript. Coffeescript is essentially a very concise and strongly functional macro language for generating Javascript. So it was a really good thing to have the experience with Erlang going into that.
Community
The other thing about foreign travel is the people you meet. I’d like to make a little plug for the Erlang community. It’s still small enough to be awesome. Just lurk on the erlang-questions mailing list, and you can learn a ton. There are some really sharp people on it, and the discussions are a fascinating mix of academic and practical. You see threads that wander from theoretical computer science to implementation details to performance issues.
Ah-ha! Moments
Like I said, Erlang has a different way of doing things. It’s not that it’s all that more complicated than other languages, but it’s definitely different. So I’m going to talk about some of the ah-ha! moments – the conceptual breakthroughs – that made learning it easier.
Syntax
I’ll start with the syntax, which is probably the least important difference, but it’s the first thing that people tend to get hung up on. They look at Erlang code, and they’re all like, “Where are the semicolons? What are all these commas doing here? Where are the curly braces?” It all seems bizarre and arbitrary. It’s not. It’s just not like C.
What helped me get used to Erlang’s syntax was realizing that what it looks like is English. Erlang functions are like sentences: You have commas between lists of things, semicolons between clauses, and a period at the end. Header statements like -module and -define each express a complete thought, so they end with a period. A function definition is one big multi-line sentence. Each line within it ends with a comma, function clauses end with a semicolon, and there’s a period at the end. case and if blocks are like mini function definitions: They separate their conditions with semicolons and end with end. end *is* the punctuation; you don’t put another semicolon before it. After end, you put whatever punctuation would normally go there.
You also have to realize that all the things you think of as control structures – case, if, receive, etc. – are expressions. Here’s a cheat sheet:
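Something like this annotated toy function captures the gist (the module and names here are my own):

    -module(cheat).              % header attributes each end with a period.
    -export([sign/1]).

    sign(N) when N > 0 ->        % a function definition is one big sentence:
        Label = positive,        %   expressions in a clause are separated by commas,
        {N, Label};              %   clauses are separated by semicolons;
    sign(N) when N < 0 ->
        {N, negative};
    sign(N) ->
        case N of                % case/if/receive branches also use semicolons,
            0 -> {N, zero}       % with no extra semicolon before `end`,
        end.                     % and the whole sentence ends with a period.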
Recursion
Recursion is not something you use much in OO languages because (a) you rarely need to, and (b) it’s scary – you have to be careful about how you modify your data structures. Recursive methods tend to have big warning comments, and nobody dares touch them. And this is self-reinforcing: Since it’s not used much, it remains this scary, poorly-understood concept.
In Erlang, recursion takes the place of all of the looping constructs and iterators that you would use in an OO language. Because it’s used for everything, there are well-established patterns for writing recursive functions. Since you use it all the time, you get used to it, and it stops being scary.
This is also where Erlang’s weirdnesses start working together. Immutable variables actually simplify recursion, because they force you to be clear about how you’re changing your data at each step of the recursion. Pattern matching and guard expressions make recursion more powerful and expressive, because they let you break out the stages of a recursion in a very declarative way. Let’s look at the basics of recursion with a very simple example: munging a list of data.
Like a story, a recursive function has a beginning, a middle, and an end. The beginning and end are usually the easiest parts, so let’s tackle those first. The beginning of a recursion is just a function that takes the input, sets up any initial state, output accumulators, etc., and recurses. In this case, we take an input list and set up an empty output list.
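Here’s a small sketch of that shape – the actual munging (doubling each number) is just a stand-in:

    %% The beginning: take the input and set up an empty output accumulator.
    double_all(List) ->
        double_all(List, []).

    %% The end: no more input, so reverse the accumulator and return it.
    double_all([], Acc) ->
        lists:reverse(Acc);
    %% The middle: munge the head, stash the result, and recurse on the tail.
    double_all([Head | Tail], Acc) ->
        double_all(Tail, [Head * 2 | Acc]).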
As a coda to this, it’s worth mentioning that this is essentially what Erlang’s lists:map/2 function does, so you could replace all the foregoing with something like:
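With the doubling stand-in from above, that’s a one-liner:

    Doubled = lists:map(fun(N) -> N * 2 end, List).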
The lists module has a number of other functions for doing simple list munging like this.
More OO than OO
The next thing is Erlang process spawning and inter-process communication. Again, this is one of those things that in normal languages is rarely used and fraught with peril. In Java, multithreaded applications involve a lot of painstaking synchronization, and you still often get bit by either concurrent modification errors or performance issues from overly aggressive locking. In Erlang of course, you do it all the time. Understanding why requires a bit of background.
The original concept of object oriented programming was that objects would be autonomous actors rather than just data structures. They would interact with each other by sending messages back and forth. You see artifacts of this like Ruby’s send method. (Rather than invoking a method directly, you call send on the object with the method name as the first argument.) In practice, objects in OO languages are little more than data structures with function definitions bolted on. They’re not active agents; they’re passive, waiting around for a thread of execution to come through and do something to them.
In a sense, Erlang is more truly object oriented than OO languages, but you come to it by a roundabout way. Since even complex data structures are immutable, updating your data always creates a new reference to it. If you pass any data structure to a function, as soon as it modifies it, it’s dealing with a different data structure. So the only way to have something like global, mutable data is to have that reference owned by a single process and managed like so:
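A bare-bones sketch of that loop, assuming a simple get/set message protocol of my own invention:

    loop(State) ->
        receive
            {get, From} ->
                From ! {ok, State},      % send a response...
                loop(State);             % ...and recurse with the same state.
            {set, NewState} ->
                loop(NewState)           % recurse with the new state.
        end.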
State is any data structure, from an integer to a nested tuple/list/dictionary structure. You’d spawn this loop function as a new process with its initial state data. From then on it would receive messages from other processes, update its state, maybe send a response, and then recurse with the new state. The key here is that State is a local variable to this function; there’s no way for any other process to mess with it directly. If you spawn another process with this function, it will have a separate copy of the State, and any updates it makes will be completely independent of this one. The simplest example I can think of would be an auto-incrementing id generator:
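Something along these lines (start/0 and next_id/1 are just hypothetical helper names):

    start() ->
        spawn(fun() -> loop(0) end).

    loop(Count) ->
        receive
            {next_id, From} ->
                From ! {id, Count},
                loop(Count + 1)      % the only "mutation" is recursing with Count + 1.
        end.

    next_id(Pid) ->
        Pid ! {next_id, self()},
        receive
            {id, Id} -> Id
        end.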
You see the same pattern in the standard library: file:open/2 says it returns {ok, IoDevice} on success. Let’s take a look at that:
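In the shell (the particular pid will vary, but the point is that IoDevice is itself a process):

    1> {ok, IoDevice} = file:open("/tmp/notes.txt", [write]).
    {ok,<0.89.0>}
    2> is_pid(IoDevice).
    true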
As with recursion and the lists module, Erlang has modules like gen_server and gen_event which give you a more formal and standard way to do this sort of thing. They add a lot of process management on top of this basic communication, so I won’t get into the details here, but check it out.
Getting Practice
Ok, so once you’ve gotten past the language concepts, how can you actually get some practice with it? Something a little more low-key than massively distributed high-availability systems?
Scripting
Probably the easiest way to start, if you just want to get comfortable with the language, is shell scripting. escript lets you use Erlang as a scripting language.
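For instance, here’s a toy word-count script (my own example, not the one from the original post):

    #!/usr/bin/env escript
    %%! -smp enable

    main([]) ->
        io:format("usage: wordcount <files>~n"),
        halt(1);
    main(Files) ->
        lists:foreach(fun count/1, Files).

    count(File) ->
        {ok, Bin} = file:read_file(File),
        Words = length(string:tokens(binary_to_list(Bin), " \t\n")),
        io:format("~s: ~p words~n", [File, Words]).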
(The %%! comment says to pass the rest of the line through as parameters to erl, the Erlang emulator.)
Testing Tools
In fact, testing tools are another way to get in some real experience with Erlang. You could do something simple to test web service functionality, or something more complicated and concurrent for load testing.
You could also mock out back-end web services for testing. I was doing some browser-side Javascript development last summer, and didn’t have access to the server I’d be talking to. (It was running on an embedded device.) So I faked it up in Erlang with Spooky, which is a simple Sinatra-style framework. It went something like this:
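Roughly like this – the handler follows the shape of Spooky’s hello-world example as I remember it, and the path and canned JSON are made up, not the real device’s API:

    -module(fake_backend).
    -behaviour(spooky).
    -export([init/1, get/2]).

    init([]) ->
        [{port, 8000}].

    %% Answer GET /status with a canned response.
    get(Req, ["status"]) ->
        Req:ok("{\"battery\": 97, \"charging\": false}").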
Web Apps
If you’re coming from a web background, that’s another good place to start tinkering with Erlang. Instead of trying to think up an Erlang project, just do your next personal web app in Erlang. Erlang has a range of web application frameworks, so you can decide how much of the heavy lifting you want to do. As you saw, Spooky lets you do simple stuff easily, but it’s fairly low-level.
ChicagoBoss is a richer, Django-like framework. It has an ORM, URL dispatching, and page templates (with Django syntax, no less). Wait, _Object_-Relational Mapper? What’s that doing in a functional language? Yeah, ok, really they’re proplists with a parameterized module and a bunch of auto-generated helper functions wrapped around them. They’re still immutable; don’t freak out. More experienced developers may argue about whether that’s the right way to do things, but it certainly makes ChicagoBoss more beginner-friendly. It also gives you some enticing extras like a built-in message queue and email server. The ChicagoBoss tutorial is really concise and well-written, so I’ll leave it at that.
If you want to get into the nuts and bolts of proper HTTP request handling, take a look at WebMachine. Most web frameworks leave out or gloss over a lot of the richness of the HTTP protocol. WebMachine not only gives you a lot of control over every step of the request handling, but actually forces you to think through it. It’s not the most intuitive for beginners, but it’s an education.
Those are the ones I’ve played with a bit, but there are lots more.
Contributing
One of the things I’ve run across with these, as with most open-source tools, is that there are “opportunities to contribute.” We’d love it if all of our software tools worked perfectly all the time, but the next best thing is if the source is on GitHub. Working with Spooky, I tripped over an odd little edge case. It turned out to be a simple fix – half a dozen lines of code. I forked it, fixed it, and put in a pull request. Had a similar experience with the ChicagoBoss templating code. They were both tiny contributions, but you still get a warm fuzzy feeling doing that. Throw in a few extra unit tests if you really want to make the owners happy.
Even if you’re unlucky, and the code works perfectly, almost every piece of software out there could benefit from better documentation. Take advantage of your newbie status; write a tutorial. The people who wrote the software know it inside and out; it helps to have beginners writing for beginners. I can vouch that a great way to learn something is to try to explain it to someone else.
Adventure Awaits!
What I hope I’ve left you with is a sense that Erlang is worth learning in its own right; that it’ll teach you new things about programming that you can apply in any language; that while it’ll be strange at first, it’s totally learnable; and that there are any number of low-intensity ways to get started using it. Most importantly, though, I want to leave you with the sense that this is fun. Learning a new language, new problem-solving tools, new ways of expressing ideas, that’s all fun. You’ve got an adventure ahead of you.
Tags: erlang
Posted in programming
Remedial Javascript
May 10th, 2012
My background here is that I’ve worked with Javascript on and off for years, but I never actually wrapped my head around how its inheritance works until just recently. I started out doing little bits of UI bling, then moved onto dynamic forms and simple ajax requests (pre-jQuery), then fancier stuff with jQuery and friends. So I’ve been able to get a lot done. It’s only when reading something like Javascript: the Good Parts that I’d get this nagging sense that I was missing something fundamental. Lately though, I’ve started working with backbone.js, doing serious model-view-controller programming in the browser, and that nagging has become loud and persistent.
Part of the problem is that I’m coming from a Java background, and Javascript looks a lot like it. It looks like it has the Java-style class inheritance that I’m familiar with. It’s got syntax like:
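Something like this (reconstructing the snippet from the o and Object names used below):

    var o = new Object();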
So I instinctively think, “Great, Object is a class, and o is an instance of that class.” You can think that, and Javascript will mostly work the way you expect. You can write a fair amount of code believing that.
But it’s wrong.
Javascript has prototypal inheritance, not class inheritance. I knew that, and seeing this class-y syntax gave me the feeling of being lied to. Not a malicious lie, but a little white “I’m glossing over the details here” lie. And once I got into trying to create my own class hierarchy, or extending someone else’s, those details started to matter. Things just didn’t work quite the way I expected. Mostly, I’d be missing values that I thought I’d inherited from somewhere. Even then, I could figure out what had gone wrong and fix it on a case-by-case basis, but that made it clear that there was something important I really just didn’t understand.
I’ve read a number of books and articles that talk about Javascript’s prototypal nature, and how you construct and extend objects, but my sense of what was going on under the hood never quite clicked. So I finally did what I always end up having to do to make sense of some bit of programming weirdness: step away from the big program I’m working on, and sit down at a shell to run some little experiments. In this case, it’s Chrome’s Javascript console. (Lines with a > are what I type. Lines without are the console’s response.) Starting off with the previous example:
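It goes something like this (the output formatting will vary a bit with your Chrome version):

    > var o = new Object()
    undefined
    > o
    Object {}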
Great, I’ve created a new Object. But what is this “Object” thing really?
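Evaluating it in the console (again, formatting varies by version):

    > Object
    function Object() { [native code] }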
Wait, so Object is a function. Huh?
The deal is that new is what’s actually doing the heavy lifting here, creating a new object. The Object function is just filling in the details. There’s nothing magic about it. You could call new on any function, and you’d get a new object. If that function sets any properties on this, they’ll show up in it. If that function has a property named prototype, the new object will inherit properties from it. prototype should be an object, but since functions are also objects, you won’t get an error if you mess up.
For example:
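Here’s a reconstruction of that example; kingdom comes from the original, the other names and values are my own placeholders:

    > function A() { this.kingdom = "Animalia" }
    undefined
    > function B() { this.legs = 4 }
    undefined
    > B.prototype = A
    function A() { this.kingdom = "Animalia" }
    > var b = new B()
    undefined
    > b.legs
    4
    > b.kingdom
    undefined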
Here, everything looks fine until you try to get b.kingdom. The trouble is that kingdom is not a property of A; it’s just a property that A sets on this.
The right thing would be:
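Make an instance of A and use that as the prototype (continuing the reconstruction above):

    > var a = new A()
    undefined
    > B.prototype = a
    A {kingdom: "Animalia"}
    > var b = new B()
    undefined
    > b.kingdom
    "Animalia"
    > b.legs
    4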
Now, any properties you add to a will be inherited by b:
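For instance (phylum is another made-up property):

    > a.phylum = "Chordata"
    "Chordata"
    > b.phylum
    "Chordata"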
But the properties that B set on b override a‘s properties:
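Continuing the sketch, b already has its own legs property from B’s constructor, so a’s version never shows through:

    > a.legs = 2
    2
    > b.legs
    4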
b doesn’t inherit properties from B:
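For example, sticking a property directly on the B function:

    > B.habitat = "forest"
    "forest"
    > b.habitat
    undefined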
And b keeps its relation to a even if B changes its prototype:
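Swap in a completely different prototype and b doesn’t notice; only new instances do (still my placeholder values):

    > B.prototype = { kingdom: "Fungi" }
    Object {kingdom: "Fungi"}
    > b.kingdom
    "Animalia"
    > var b2 = new B()
    undefined
    > b2.kingdom
    "Fungi"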
So that’s what it does, but what are the relationships between all these objects and functions?
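You can poke at them directly; Object.getPrototypeOf shows an object’s actual prototype:

    > Object.getPrototypeOf(b) === a
    true
    > Object.getPrototypeOf(a) === A.prototype
    true
    > b instanceof A
    true
    > b.constructor === A
    true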
So we have this odd sort of dual inheritance going on. b is an instance of B, and has a prototype of a. The instanceof relationship is purely historical: Changes to B don’t affect b after its construction. The prototype relation is ongoing and dynamic. Changes to a‘s properties show up in b (unless b overrides them). Perhaps even more oddly, b is an instance of a‘s constructor, but there’s no direct connection from B to A, only through a. It looks like this in my head:

     B   A
    / \ /
    b---a

In short, an object inherits a type from its constructor and behavior from its prototype. In a class-based language, an object gets both of these from its class, but in Javascript, “What is it?” and “What can it do?” are different questions with different answers.
If you got all the way through this article and this stuff still doesn’t make sense, grab a Javascript console and try it out yourself. Work through the examples. Type them in by hand; don’t copy-paste them. (Seriously, that makes a huge difference.) Ask your own questions, come up with your own experiments. Tinker.