
Saturday, May 28, 2011

Go on App Engine Example - Part 1

The App Engine team recently announced support for Go as a runtime for use in apps. Summary up front: the App Engine SDK for the Go runtime is the easiest way I've found yet to get started with Go. As I change my code, it is recompiled in the background when I make a request to my app, so it feels very much like developing in a scripting language.

I've been excited about the Go language for some time now (specifics on why will have to wait for another post), so I was eager to try it out on one of my favorite platforms: App Engine. I wanted to start with something small, so I wrote a simplified version of a web app that I've been itching to write lately: a site for hosting plain text content. Specifically, I want something that preserves whitespace, allows me to line up columns of text, and supports non-English characters (Unicode). Those are the kinds of things I need when sharing and talking about code. There is also a great deal more you can do with plain old monospaced text, so maybe you'll find this useful as well.

With that objective in mind, I give you the Plain Text Machine. This little app lets you enter a small amount of text, somewhere around 2,000 characters, and gives you a link that others can visit to see an HTML reproduction of your writing. I mentioned I wanted to keep this simple, so here's the odd little bit: this app doesn't store your text anywhere. The URL that is generated contains the text itself, hence the somewhat low limit on message length. It certainly keeps the app simple; the most complex logic is the code that converts the text from the URL into HTML.
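To make the URL-as-storage idea concrete, here is a rough sketch of it in Python (just an illustration, not the app's actual Go code; the host name is made up):
import urllib

message = 'one  two\tthree\nfour'
# Percent-encode the text so whitespace and Unicode survive in a URL.
link = 'http://example-plain-text-host/show?msg=' + urllib.quote(message)
print link
Since browsers and proxies commonly cap URLs at around 2,000 characters, the message length limit falls out of the design for free.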

A request starts by hitting the init function:
func init() {
    http.HandleFunc("/", handle)
    http.HandleFunc("/show", show)
}
The main page, at /, is just static content; it's the /show handler we're interested in. It looks like this:
func show(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "text/html; charset=utf-8")
    // Get the message from the URL.
    PrintHtml(utf8.NewString(r.FormValue("msg")), w)
}
The above does two things: it sets the content type of our response so that the browser will know it is HTML, and it reads the msg URL parameter from the request and hands it off to be converted to HTML.

The PrintHtml function prints out some boilerplate HTML, then reads the message one character at a time and converts each character to its HTML-safe equivalent. There's a tiny bit of complexity to make sure that whitespace is preserved instead of being collapsed, as repeated spaces normally would be in HTML. Here's the code:
func PrintHtml(text *utf8.String, out http.ResponseWriter) {
    spaces := false

    fmt.Fprint(out, textHeader, middle)
    for i := 0; i < text.RuneCount(); i++ {
        currentChar := text.At(i)

        if currentChar == 32 && !spaces {
            // A first space.
            fmt.Fprint(out, " ")
            spaces = true
        } else {
            if currentChar == 32 {
                // Space following another space.
                fmt.Fprint(out, "&nbsp;")
            } else if currentChar == 10 {
                // Newline.
                fmt.Fprint(out, "<br>")
            } else if currentChar == 9 {
                // Tab.
                fmt.Fprint(out, "&nbsp;&nbsp;&nbsp; ")
            } else if currentChar == 38 {
                // &
                fmt.Fprint(out, "&amp;")
            } else if currentChar == 60 {
                // <
                fmt.Fprint(out, "&lt;")
            } else if currentChar == 62 {
                // >
                fmt.Fprint(out, "&gt;")
            } else if currentChar < 31 || currentChar == 128 {
                // Skip control characters.
            } else if currentChar < 127 {
                // Printable ASCII passes through unchanged.
                fmt.Fprintf(out, "%c", currentChar)
            } else {
                // Everything else becomes a numeric character reference.
                fmt.Fprintf(out, "&#%d;", text.At(i))
            }
            spaces = false
        }
    }
    fmt.Fprint(out, footer)
}
The textHeader, middle, and footer variables are string constants containing the wrapper HTML that provides style information.

If you're interested in the full source code for this tiny little app, you can find it in the Plain Text Machine open source project. Hopefully this example provides an easy-to-understand picture of what Go code for App Engine looks like.

I had quite a bit of fun putting together this app. By keeping it simple I was able to go from idea to done in less time than it took me to write this blog post. As an added bonus, having an app with no persistent storage brings up some interesting philosophical questions. For example, if a message is created but no one stores the link to it, does it still exist?

Tuesday, June 22, 2010

A Simple Testing Library for C

To prepare for a recent post graduate computer science class, I wrote a small library in C which aids in the creation of lightweight, unit-test-like programs. The code can be found here, and using it looks a bit like this:
#include"asserts.h"

int main(void)
{
c7e3_assert(1 == 1, "1 should equal 1");
c7e3_assert(2 == 2, "2 should equal 2");
c7e3_report();
return 0;
}
The design follows the KISS principle, and I think it is a nice fit for the simplicity of C. While there is not much to it, I wrote numerous tests using it over the past couple of months and all of that testing certainly paid off.

Friday, June 18, 2010

JavaScript Tricks to Speed up Your Site

One of the techniques which makes the web so powerful is the ability to load code, images, and other resources from all over the Internet. So often though, the process of loading these resources and ensuring that all of the required pieces are in place leads to a slow experience for visitors. With the ability to include so much code from across the web, visiting a site could potentially be like installing a new program when it comes to the amount of stuff that needs to be downloaded.

With this in mind, there are a couple of nifty tricks that can help make your app more responsive, and I've written up an example site and testing server that show some ideas for speeding up the user experience when you need to wait for the DOM to load or for additional JavaScript to be fetched and run. We'll begin with document operations.

Often the JavaScript running on a page manipulates the DOM, using document.getElementById here and document.createElement there. In order to ensure that all the pieces of the page are in place, web programmers often take advantage of the onload callback. It might be used like this:
<body onload="runMyCodeNow()">
Using this technique ensures that all of the things your code might want to read and write from the page are in place: all images have been downloaded, CSS rules have been applied, the layout is all there. However, all of this comes at a cost: your code doesn't run until every last resource has been fetched and rendered, even the little footer at the bottom of the page that your code doesn't care about.

There is another way: we can request that resources be loaded in parallel and start executing our code before the page is fully loaded. Chances are, your code doesn't need the complete page to be loaded before it starts running, and running before onload will reduce the delay for your users. Before I dive into how this can be accomplished, let's look at an example which uses the old-fashioned way.

Let's say you have a web page, a little HTML which includes five JavaScript files. One may be a library used to do animation, another one for loading the user's data. In any case, all of these files need to be loaded, and some of them depend on others.

The biggest bottleneck for your users is almost certainly waiting for all of these resources to load. Network latency is a killer, and something that is often overlooked during development. To create a simulated network environment which gives a more realistic (or even pessimistic) view of the cost of loading these resources, I wrote a "slow server" which introduces a configurable delay before serving the requested file. Here is the code for my testing server (designed to run on App Engine):
def FilePath(path):
    """Converts the requested path into a local file path."""
    return os.path.join(os.path.dirname(__file__), 'files', path[1:])


class SleepyRenderer(webapp.RequestHandler):
    """Serves the requested page with a client configured delay.

    Delay is given as a URL parameter in hundredths of a second to delay.
    For example, 200 means wait 2 seconds before responding.

    Example request:
    http://localhost:8080/hi.html?delay=300&contenttype=text/html
    """

    def get(self):
        path = self.request.path
        delay = self.request.get('delay')
        content_type = self.request.get('contenttype') or 'text/html'
        if delay:
            # Divide by a float so a delay like 150 sleeps 1.5 seconds
            # instead of being truncated by integer division.
            time.sleep(int(delay) / 100.0)
        http_status = 200
        requested_file = None

        try:
            requested_file = open(FilePath(path))
            self.response.out.write(requested_file.read())
            requested_file.close()
        except IOError:
            http_status = 404

        self.response.set_status(http_status)
        self.response.headers['Content-Type'] = content_type


def main():
    application = webapp.WSGIApplication([('/.*', SleepyRenderer)],
                                         debug=True)
    util.run_wsgi_app(application)


if __name__ == '__main__':
    main()
With the above code we can introduce a delay on each individual file. To see this in action with our example, here is some HTML which shows the traditional approach: script includes and an onload callback that runs when everything has loaded.
<html>
  <head>
    <script src="/testa.js?delay=500&contenttype=text/javascript"></script>
    <script src="/testb.js?delay=400&contenttype=text/javascript"></script>
    <script>
      function init() {
        var output = document.getElementById('output');
        output.innerHTML = [
          'a is ' + a,
          'b is ' + b,
          'c is ' + c,
          'd is ' + d,
          'e is ' + e
        ].join('<br>');
      }
    </script>
    <script src="/testc.js?delay=300&contenttype=text/javascript"></script>
  </head>
  <body onload="init()">
    <script src="/testd.js?delay=200&contenttype=text/javascript"></script>
    <div id="output"></div>
    <script src="/teste.js?delay=100&contenttype=text/javascript"></script>
    <script src="/testa.js?delay=500&contenttype=text/javascript"></script>
  </body>
</html>
With the above, the page takes several seconds to load, and only when the very last script has loaded does the 'output' div get its contents. In many cases, the code really doesn't need to wait for all resources to load, only the ones that are necessary for the code to run. Here, since the information is added to the output div, we need the output div to exist in the DOM, but we may not need the entire page to load.

If you look at this loading process in a profiler, you can watch the scripts load one after another while the page waits.

Now for our first nifty trick. One way to check whether the necessary prerequisites are present is to poll the DOM or the JavaScript environment to see if conditions are right for the code to run. Here is an example of how this code might be rewritten using some polling helper functions:
<script>
  loader.whenNodePresent('output',
      function() {
        var output = document.getElementById('output');
        loader.whenReady(function() {return window['a'];},
            function() {
              output.innerHTML += 'a is ' + a + '<br>';
            });
        loader.whenReady(function() {return window['b'];},
            function() {
              output.innerHTML += 'b is ' + b + '<br>';
            });
        loader.whenReady(function() {return window['c'];},
            function() {
              output.innerHTML += 'c is ' + c + '<br>';
            });
        loader.whenReady(function() {return window['d'];},
            function() {
              output.innerHTML += 'd is ' + d + '<br>';
            });
        loader.whenReady(function() {return window['e'];},
            function() {
              output.innerHTML += 'e is ' + e + '<br>';
            });
      });
</script>
The code to track the prerequisites and poll is quite simple:
loader.waiting = [];


loader.whenReady = function(testFunction, callback) {
  if (testFunction()) {
    callback();
  } else {
    loader.waiting.push([testFunction, callback]);
    window.setTimeout(loader.checkWaiting, 200);
  }
};


loader.checkWaiting = function() {
  var oldWaiting = loader.waiting;
  var numWaiting = oldWaiting.length;
  loader.waiting = [];
  for (var i = 0; i < numWaiting; i++) {
    if (oldWaiting[i][0]()) {
      oldWaiting[i][1]();
    } else {
      loader.waiting.push(oldWaiting[i]);
    }
  }

  if (loader.waiting.length > 0) {
    window.setTimeout(loader.checkWaiting, 200);
  }
};


loader.whenNodePresent = function(nodeId, callback) {
  loader.whenReady(function() {
    return document.getElementById(nodeId);
  }, callback);
};
In the above, the whenReady function takes two functions: one that returns a truthy or falsy value, and one to call back once the first evaluates to true. If the condition function isn't true when whenReady is first called, we check back every 200 milliseconds until it is.

With these changes, we shave several seconds off of the user-perceived loading time. Specifically, we no longer need to wait for the duplicate load of the testa script at the end of the body. The page also appears more responsive because each of the later scripts' messages shows up just after that script loads, before the page is complete.

Now that we've seen a way to work around the need for an onload callback, let's look at another place we can tweak the browser's behavior to make a web page more responsive: dynamic script loading.

The most straightforward way to include new code in your page is to use a script tag, something like:
<html>
<head>
<script src="some_great_sites_javascript">
...
When the browser's JavaScript interpreter encounters this script src, it stops whatever it's doing and fetches that resource. It doesn't do any more rendering or executing of code until it's finished. This behavior varies a bit across browsers and is likely an artifact of an old design in which this kind of single-threaded behavior was the only option. Since some sites might depend on this linear behavior to get a script's dependencies in order, this quirk might be with us for a long time. Most of the time, though, waiting like this is a really silly idea. How often do the scripts that you include depend on one another?

There are a few parts to this trick. The first is not to put all of the script includes in the HTML; instead, have JavaScript add new script elements to the page, causing new code to be loaded as needed. This way you load only the resources that are needed at the moment, and perhaps some resources will never be requested at all. Including a new script can be done in two ways:
document.write('<script src="somefile.js"></script>');
or
var newScript = document.createElement('script');
newScript.src = 'somefile.js';
document.body.appendChild(newScript);
Each of the above is appropriate in different situations. document.write adds HTML directly into the page at the point where parsing is currently happening, so it should only be used for script tag inclusion if the page is not yet loaded. If the page is loaded, using document.write to add the script tag will wipe out the existing body entirely. I've seen this issue in the wild; if you assume document.write is always safe, you'll be bitten when using it after the page has loaded.

Instead, you can check whether document.body exists: if it does, use document.body.appendChild; if it does not yet exist, use document.write. The code for this loader logic might look something like this:
loader.loadScript = function(url) {
  if (document.body) {
    var newScript = document.createElement('script');
    newScript.type = 'text/javascript';
    newScript.src = url;
    document.body.appendChild(newScript);
  } else {
    document.write('<scr' + 'ipt type="text\/javascript" src="' +
        url + '"><\/scr' + 'ipt>');
  }
};
Now we can request that new JavaScript code be loaded on the fly, and it works both while the page is still loading and after it has finished.

There is one more trick we can add to this loader. Some browsers will interpret the JavaScript in the order in which the scripts were requested, not the order in which they finished loading. That means a fast-loading script further down the list won't be run until the slower scripts above it have loaded. One way we can defeat this delay is to break the script includes out of linear execution. If you use setTimeout to introduce a short delay before adding the script element to the page, the code which sets up the script requests can finish quickly, and the browser will get back to the script requests later without the same linear constraints. In our code, we wrap the body of loader.loadScript in a short timeout as follows:
loader.loadScript = function(url) {
  window.setTimeout(function() {
    if (document.body) {
      var newScript = document.createElement('script');
      newScript.type = 'text/javascript';
      newScript.src = url;
      document.body.appendChild(newScript);
    } else {
      document.write('<scr' + 'ipt type="text\/javascript" src="' +
          url + '"><\/scr' + 'ipt>');
    }
  }, 1);
};
With the above changes in place, the profile of our example page looks quite different: the messages appear in the order the scripts finish loading, and we don't have to wait for everything before editing the page.

Through the course of this post, I've written a small library for using these tricks when loading JavaScript dynamically in the page, as well as a server for trying it out. These are available here as open source code. There are some improvements that could be made: off the top of my head, the checkWaiting function could eventually time out if a condition is never met, and the loader could do more to check whether a requested script has already been loaded. Any more ideas?

Wednesday, October 15, 2008

Twitter Client

As a proof of concept for using the sippycode HTTP library which I wrote about in my last post, I decided to create a simple text console client for Twitter. Download the Twitter terminal application here.

Twitter's RESTful API is quite simple, and I wrote an open source library for Twitter based on the sippycode HTTP library in a few minutes. Here's an example of posting a new update (tweeting):
import sippycode.http.core as http_core
import sippycode.auth.core as auth_core


class TwitterClient(object):

    def __init__(self, username, password):
        self._credentials = auth_core.BasicAuth(username, password)

    def update(self, message):
        request = http_core.HttpRequest(method='POST')
        http_core.parse_uri(
            'http://twitter.com/statuses/update.xml').modify_request(request)
        request.add_form_inputs({'status': message})
        self._credentials.modify_request(request)
        client = http_core.HttpClient()
        response = client.request(request)
        return response
In the above, the client sends an authenticated POST to the updates URL. Using the TwitterClient in your code looks like this:
client = TwitterClient('my-username', 'my-password')
client.update('Try out this Twitter client: http://oji.me/wP')
To try out this Twitter console app, unpack the download and run sippy_twitter.py. With it, you can update your status on Twitter or read the updates from your friends. When reading, the client displays five updates at a time, since showing more at once would likely cause some to scroll off the top of the screen (assuming the terminal displays twenty-five lines).
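For the curious, the read side can be built from the same sippycode primitives as update above. The sketch below is my illustration rather than the library's actual read code: read_timeline is a hypothetical helper, the friends_timeline URL is Twitter's XML timeline feed, and I'm assuming the response object can be read like httplib's.
from xml.dom import minidom
import sippycode.http.core as http_core

def read_timeline(client, count=5):
    # Build an authenticated GET against the friends timeline feed.
    request = http_core.HttpRequest(method='GET')
    http_core.parse_uri(
        'http://twitter.com/statuses/friends_timeline.xml'
    ).modify_request(request)
    client._credentials.modify_request(request)
    response = http_core.HttpClient().request(request)
    # Assumption: the response behaves like httplib's and supports read().
    doc = minidom.parseString(response.read())
    # Each status update carries its message in a <text> element;
    # show one screenful at a time.
    for node in doc.getElementsByTagName('text')[:count]:
        print node.firstChild.data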

This simple application was designed to be a proof of concept, but it's really grown on me. Cycling through all of my friends' updates doesn't require any scrolling, and it feels snappier than the web interface. It seems like others are enjoying this terminal client too.

There are quite a few ways that this client could be improved, so there's plenty of opportunity to pitch in if you are interested. I have received feature requests from friends who previewed this app, such as: support command line arguments which will allow the client to perform updates when being run from another program, show a running countdown from 140 characters as you are typing your update (could probably be done using ncurses), ability to follow users, and read updates from just one user. If you'd like to participate in any of these, let me know in the comments.

Fire up your terminal and give this client a try. Why not post an update to @jscud right now?

Monday, October 13, 2008

An Open Source Python HTTP Client

At Super Happy Dev House 27, I made significant progress on an open source library for making HTTP requests in Python. For the past few years I've been working with web services and APIs (SOAP and REST, specifically AtomPub, among others), and I wanted to create an HTTP library which is simple, clean, and precise. Python has a couple of great HTTP libraries already, but one of them is a bit too low level (httplib) and the other is too high level (urllib2).

For example, in httplib you call a method to send data as if you are writing to a file (httplib uses sockets, after all). Required HTTP headers like Content-Length are not calculated for you. You'll need to handle cookies and redirects on your own. On the plus side, you get full control of what is being sent. The higher level library, urllib2, is built on top of httplib. It adds some handy abstractions, like calculating the Content-Length, but it also has some limitations. I haven't yet been able to figure out how to perform a PUT or DELETE with urllib2.
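For what it's worth, one commonly cited workaround for the PUT/DELETE limitation is to subclass Request and override get_method. This is a sketch of that pattern, not part of urllib2 itself; the MethodRequest name and the example URL are made up:
import urllib2

class MethodRequest(urllib2.Request):
    """A Request that lets the caller choose the HTTP method."""

    def __init__(self, *args, **kwargs):
        # Pull out our extra keyword before the base class sees it.
        self._method = kwargs.pop('method', None)
        urllib2.Request.__init__(self, *args, **kwargs)

    def get_method(self):
        # Fall back to urllib2's GET/POST logic when no method was given.
        return self._method or urllib2.Request.get_method(self)

# A PUT through the standard opener:
request = MethodRequest('http://example.com/resource',
                        data='new contents', method='PUT')
response = urllib2.urlopen(request)
It works, but having to reach around the library like this is part of what motivated writing something new.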

When making HTTP calls to web services, there are often a large number of HTTP headers, URL parameters, and components to the request. Making a request feels like making a function call in most HTTP libraries. In the past, I've wrapped these functions with successive layers containing more and more function parameters. For example, in a request to send a photo and metadata to PicasaWeb, you need to include an Authorization token, a Content-Type specifying a MIME-multipart request and the multipart boundary, and a multipart payload consisting of the Atom XML describing the photo and the photo's binary data. If you add in the ability to specify other headers and URL parameters, your function call might look like this:
def post_photo(url, url_parameters, escape_parameters,
               photo_mime_type, photo_file_handle,
               photo_file_size, metadata_xml,
               metadata_mime_type, auth_token,
               additional_http_headers):
    ...


# Sets the request's Host, port, and uri.
# Makes the request into a MIME multipart request,
# adjusts the Content-Type, and recalculates
# Content-Length.
# Sets the Authorization header.
post_photo('http://picasaweb.google.com/data/'
           'feed/api/user/userID/albumid/albumID', None,
           False, 'image/jpeg', photo_file, photo_size,
           atom_xml, 'application/atom-xml',
           client_login_token, None)
To use the above, you have to gather all of the information in one place and make the function call. There are cases where you want a design like this.

However, more and more I think of ways the program could be more cleanly structured if this information could be compartmentalized. This new library relies on an HttpRequest object which various parts of the program modify. Once all of the modifications have been applied, the fully constructed request is passed to an HttpClient which communicates with the server using httplib or urlfetch if you happen to be on Google App Engine. (Support for more HTTP libraries is certainly possible.)

The photo posting example from above could look something like this. Keep in mind that these steps could be carried out in a different order in different segments of code.
photo_post = HttpRequest(method='POST')
# Sets the Authorization header.
client_login_token.modify_request(photo_post)
# Adds to the body, calculates Content-Length,
# and sets the Content-Type.
photo_post.add_body_part(atom_xml, 'application/atom+xml')
# Makes the request into a MIME multipart request,
# adjusts the Content-Type, and recalculates
# Content-Length.
photo_post.add_body_part(photo_file, 'image/jpeg', photo_size)
# Sets the request's Host, port, and uri.
parse_uri('http://picasaweb.google.com/data/'
          'feed/api/user/userID/albumid/albumID').modify_request(photo_post)
In fact, the above code could make up the body of the post_photo function described in the first code snippet.
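Putting the pieces together, that body might read as follows. This is just a sketch using the names from the snippets above; the extra parameters from the first signature (url_parameters, additional_http_headers, and so on) are left out for brevity:
def post_photo(url, atom_xml, photo_file, photo_size,
               client_login_token):
    photo_post = HttpRequest(method='POST')
    # Sets the Authorization header.
    client_login_token.modify_request(photo_post)
    # Adds the metadata part and sets the Content-Type.
    photo_post.add_body_part(atom_xml, 'application/atom+xml')
    # Converts the request to MIME multipart and recalculates
    # Content-Length.
    photo_post.add_body_part(photo_file, 'image/jpeg', photo_size)
    # Sets the request's Host, port, and uri.
    parse_uri(url).modify_request(photo_post)
    return HttpClient().request(photo_post)
Each collaborator touches the request without needing to know what the others did, which is the whole point of the design.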

I created an open source project for this and other small projects called sippycode (a play on sippy cup). This is a place where code can grow up.

Sunday, September 28, 2008

In Praise of screen

I've used screen for a few years now, but I only recently learned about one of its highly helpful features. As I was using my XO laptop in text-only mode (ctrl-alt-fn-2), I relied on screen to simulate multiple terminals, and I needed to copy and paste text between them. In the past, I've usually used screen when ssh-ing into a machine, and putty (my ssh client of choice) provided copy and paste, so I had never needed screen's system.

It turns out, screen's built-in, cross-window copy-paste system is a breeze to use. Press ctrl-a [ to enter copy mode, press enter to mark the start point and enter again to mark the end point. You have now copied the text. Switch to the desired window, and paste in the text using ctrl-a ].

This feature is especially handy on the XO laptop, where I've never been able to figure out how to copy and paste in the Terminal Activity.

For me, screen's most useful feature has been the ability to detach and reattach to a session which continues to run on the server. If I lose my ssh connection, all of my processes continue to run, and I can reattach to my screen session as if nothing ever happened.

Screen can be difficult to understand if you've never seen it in action, so I recommend watching this video. You can also learn more in this tutorial.

Sunday, September 07, 2008

A Revived Project

q12 is back!

I'm slowly starting back up again on my note taking wiki application. I created my own version of TiddlyWiki several months ago, but then started working on other projects. I'm planning to rewrite my note taking Ajax application to run on Google App Engine, and as I was getting started I realized that there were a few things missing from the Ajax library that I had written as part of this project.

I had created my own simple unit test framework in JavaScript, and I finally got around to uploading the unit tests for the library to the open source project. I've also been learning about manipulating browser cookies from within JavaScript. Aside: cookies in JavaScript are weird! When you say document.cookie = something, reading document.cookie doesn't give you the same thing back (the expiration, domain, and path information are squirreled away somewhere else).

I've also added a minified version of the q12 library; it weighs in at a mere 10k. Download the library today!

Wednesday, August 20, 2008

Dirt Simple CMS

I recently created an App Engine app to run www.jeffscudder.com. At the moment the code is extremely simple, and I get so few visitors to that web page that I doubt I will need anything complicated.

When I write blog posts and web pages, I have always preferred to just edit the HTML, and I have always wanted a simple content management system that just lets me edit the HTML, JavaScript, CSS, etc. in the browser. Blogger comes awfully close to the perfect tool in my opinion, but it is geared towards displaying a series of posts. I wanted a landing page with links to all of the other content I put out there in the blagoweb. And I wanted to be able to host the simple web apps that I write (like the recently mentioned password generator).

With those design goals in mind, I set out to create my super simple content management system. It runs on App Engine, and the admin (me) is able to sign in to a special secret /content_manager page which lets me assign a specific blob of text to the desired URL under my domain. I can also set some basic metadata, like the content type (so that your browser knows how the content should be rendered) and cache control information, since HTTP caching is excellent and saves puppies from drowning in lakes (ok seriously it will alleviate congestion and unnecessary traffic when you want to give the same content to thousands or millions of people).
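The serving side of an app like this is small enough to sketch. To be clear, this is not the actual scud-cms code, just an illustration of the idea using the App Engine datastore; the Page model and its property names are made up:
from google.appengine.ext import db, webapp
from google.appengine.ext.webapp import util


class Page(db.Model):
    """A blob of content served at a particular URL path."""
    path = db.StringProperty()
    content = db.TextProperty()
    content_type = db.StringProperty(default='text/html')
    cache_control = db.StringProperty()


class PageRenderer(webapp.RequestHandler):

    def get(self):
        # Look up the stored blob for the requested path.
        page = Page.all().filter('path =', self.request.path).get()
        if page is None:
            self.response.set_status(404)
            return
        self.response.headers['Content-Type'] = str(page.content_type)
        if page.cache_control:
            self.response.headers['Cache-Control'] = str(page.cache_control)
        self.response.out.write(page.content)


def main():
    util.run_wsgi_app(webapp.WSGIApplication([('/.*', PageRenderer)]))


if __name__ == '__main__':
    main()
A catch-all handler plus one datastore lookup per request is really all it takes; the /content_manager page is then just a form that writes these entities.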

Editing pages through the /content_manager looks like this:

[Screenshots: the /content_manager editing page]
I've also decided to open source the code and I called the project scud-cms. Since App Engine is free for you to sign up, you can just upload this code and start setting your own content from right there in the browser.

(P.S. The idea for this simple content manager is very similar to one of my earlier projects: Scorpion Server, with which an authorized user could set the content at just about any URL they wanted.)

Monday, April 21, 2008

Ubuntu Hardy Heron

Last week, I downloaded the beta release of Ubuntu 8.04 (Hardy Heron) to give it a try. I've been meaning to migrate our last Windows XP machine over to Linux for some time now (the other three computers I use regularly are Linux machines), but I've been reluctant to back up, repartition, and take the plunge. It seems like this is a common feeling among computer owners, but I think the Ubuntu community may have found an effective solution.

And the name of this new innovation: Wubi. Pop in the Ubuntu CD while running Windows, and an auto-run installer opens which allows you to install Linux alongside Windows. When you reboot your computer, you see a menu asking which operating system to boot into: Windows or Ubuntu. This means you can try out Ubuntu on your computer with zero risk to your existing files. In fact, you can access the files on your Windows installation from within Ubuntu. And if you decide Ubuntu is not for you, you can uninstall it as you would any other Windows program. Pretty slick.

The installation is just the beginning of the improvements the team has made. After I installed the latest version and booted into Ubuntu, it told me that there were proprietary drivers for my NVIDIA graphics card and asked if I wanted to install them. I clicked the button, the download started, and I was up and running with 3D-accelerated graphical desktop effects. I think I could sit there opening and closing windows all day. I'm looking forward to the upcoming official release.

Monday, March 24, 2008

XO Laptop

My XO laptop arrived in the mail recently and it is quite an amazing little machine. Conclusion up front: I'm extremely satisfied with it and in some ways this laptop computer is better than ones that sell for ten times the price.

You might recall from a previous post that I had downloaded the XO's operating system and taken it for a test drive in an emulator. Now I have the real thing in front of me, and it's safe to say that it is even better. After all, some of the most innovative features of this computer are in the hardware. My favorite feature is the screen. It is viewable in direct sunlight which makes it usable outdoors. Second up would be the wireless networking. The graphical network selector is fun to use and the connection tends to be more reliable than any of the other computers I've used with my home wireless router. The battery life is also impressive, easily five hours on a charge of its small battery. It even has a built-in camera and microphone.

It runs all of the software I need too. I used the instructions I wrote up when I installed Firefox on the emulated operating system. Everything went smoothly and I was browsing the web using Firefox in a few minutes. (The XO laptop comes with a perfectly good web browser, but I wanted to use my favorite plugins and have more control over downloads.)

I'm quite taken with the little machine. I've been using it as my primary computer at home for all of the tasks I normally do (mostly browsing the web and programming). There are a couple of things that I would change if I had the chance. The first is the keyboard. It is an interesting design, made of a flexible rubber-like substance, and it works much better than other flexible keyboards that I've tried, but it took a while to get used to the shift key (I have to press in the corner of the key). The other difficulty is the slower processor, but it doesn't get in my way most of the time. The only time I notice any slowness is when playing flash videos (like on YouTube). Perhaps part of the problem is flash for Linux, but I'm not sure. In any case, I don't really mind, as I don't watch that much video, and if I want to, I have other computers that I can use.

It will probably come as no surprise that I wrote this post using the little green computer. I'm saddened by the end of the "give one get one" program, as I think there is still the opportunity for more people to donate and receive their own XO. If anyone is interested, it might be possible to order a batch of one hundred or more through the "give many" program.

Tuesday, March 11, 2008

BusyList

Andy and I started work on a simple little open source project for tracking tasks; it's called busylist. We wanted to experiment with Ajax, Python, and web service APIs, so we whipped up a basic system in a few hours. There is still quite a bit of work to be done, but it has been a great learning experience so far. An extremely alpha test version is available in subversion along with some instructions on the project's wiki pages. If you're interested, feel free to check it out (pun intended) and contribute if you like. It is an open source project after all.

Tuesday, March 04, 2008

(Portable) Ubuntu for Programmers

I've been trying over the past several weeks to find the best fit for Linux on a USB pen drive so that I can boot into my own operating system and get to my files no matter which computer I'm using. As you might notice from my other posts, I tend to spend quite a bit of my computer time in programming and browsing the Web, so the things I'm most interested in are a web browser (Firefox), support for wireless cards in several computers, and a variety of command line programming tools (gcc, python, vim, etc.). It should be possible to take one of the standard Linux distributions and install it on a USB drive (provided the drive is large enough), but I wanted to use a one gigabyte drive that I had, and with my simple needs I should really be able to get all of the necessities in under one gig. Along the way I've tried Puppy Linux, Slax, Feather Linux, DSL, and others, but I decided in the end to roll my own solution based on Ubuntu.

I'm a big fan of Ubuntu, but the standard desktop install is far too large for installation on a one gig drive. For a while I was using the live CD booting from a pen drive, with a partition for my files. I used the instructions I found on Pen Drive Linux to set up the pen drive with the image from the live CD (only 750 megabytes). The only problem with this set-up was that all of my files were in a separate partition and my home directory was wiped out each time. Since many Linux programs store settings in your home directory, this turned out to be a bit inconvenient. I tried a few different options, but finally decided to go with a stripped-down Ubuntu foundation and add the things I wanted.

I began with Ubuntu Server 7.10 and installed it on my USB drive using some of the recommendations in the installation instructions for low memory systems. During the installation process I selected guided partitioning and I did not choose to install any of the software configurations in the "software to install" menu. After installing, I rebooted and added the following packages using sudo apt-get install:
lynx (optional)
screen (optional)
gcc (optional)
xorg
x-window-system-core
firefox
If you are using a laptop, you will likely want to install the following packages as well:
acpi
acpid
With the above installed you can check the battery's charge, remaining time, etc. by running acpi on the command line. For the graphical desktop window manager, I chose iceWM. I installed it by adding:
icewm
iceconf
icewm-themes
In the past I've worked quite a bit with Fluxbox as a window manager, but it seems like iceWM is easier to configure, especially under Ubuntu. The liQuid theme looks quite nice.

This setup boots into a text-only command line mode because it is based on Ubuntu Server; to enter graphics mode, you simply run startx. I connected to my wireless network using wpa_supplicant and iwconfig.

One of the benefits of working on a lightweight system on a flash drive is the bootup speed. In twenty seconds the computer boots from a cold start, connects to my wireless network, and enters the graphical desktop. I'm quite happy with my little portable operating system, and you probably won't be surprised to hear that I wrote this post using it.

Friday, February 08, 2008

Firefox 3 Beta 2

I recently downloaded the second beta of Firefox 3 from Portable Apps. I didn't want to replace the version of Firefox I already had installed, so I used the version from Portable Apps which runs as a standalone binary. Sometimes it's really nice to unpack a program without touching the registry or worrying about installing.

Overall, I'm very happy with the changes I've seen in the latest version. I had heard that there have been some improvements to the JavaScript engine in this version, and they are noticeable: when I logged in to Gmail it seemed a bit more responsive. I have to say, though, that my favorite changes are in the address bar. When typing an address, the address bar shows addresses, titles, and logos for pages that you've already visited. Firefox 2 did this too, but I think 3 gives more detail and a larger number of results, and I found myself using it much more often. Part of the reason is that it shows the most recently visited page at the top instead of the shortest match. I also liked that you can star a URL to add it to your bookmarks, and you can even select a folder and tag the URL from within the address bar.

Friday, December 21, 2007

An open source JavaScript library: q12

I've written some JavaScript utility functions as part of my ongoing wiki/note-taking application. It is growing into a full-fledged Ajax application with a web server and now a minimal Ajax library, which I've decided to release as a separate open source project.

From the project's front page:

I found myself needing a few common utilities as I was writing an Ajax application. Rather than use a heavyweight or verbose library, I wanted something compact that minimized the amount of typing I needed to do. This is where the name q12 comes from, just three little keys up there in the upper left corner of the keyboard.

This library provides functions for the following:

  • Basic DOM manipulation
  • Asynchronous HTTP requests with callbacks
  • Class methods and inheritance
  • Base64 encoding and other forms of data escaping
  • AES encryption
Writing your own Ajax library is also a great way to learn JavaScript (IMHO). I'll be making little tweaks as I work further on my project; it's getting quite close. I think I've probably said that before, but rewriting from scratch tends to set one back a bit. Third iteration's the charm?

Monday, November 12, 2007

Fluxbuntu 7.10

Long time readers may remember the saga of my old laptop (a Compaq Presario 1700T circa 2000). I had ditched openSUSE several months ago in favor of Fluxbuntu, a variant of Ubuntu which uses the lightweight Fluxbox window manager in place of the Gnome desktop. My old computer has only 128 megabytes of RAM, so memory is at a very high premium. With the release of Gutsy Gibbon, Fluxbuntu picked up a few new features, so I upgraded and gave it a try. I have been extremely pleased. All the good stuff is there that I enjoyed before (installing new free software using Synaptic or Aptitude, Firefox, XMMS, etc.), but there were a couple of great new additions. For one, automounting of USB drives. I have a small pen drive that I carry around to hold many of my files: music, programming projects, etc. Mounting had always been a bit of a pain with my laptop's OS. Now I just plug it in, it mounts, and an icon for the drive appears on my desktop.

The discovery came at a perfect time. My in-laws decided they wanted to resurrect an old PC so they could browse the web side by side. All they really needed was Firefox, OpenOffice, and Picasa, and their computer had the same amount of RAM as my old laptop (probably from about the same time frame). Fluxbuntu to the rescue ;-) Can you believe it, my in-laws are running Linux.

Tuesday, January 23, 2007

What is ogg vorbis?

I'm glad that you asked. Ogg Vorbis is a format for music files, like MP3. In many ways, I think that Ogg Vorbis is better: it produces higher quality output at a lower bit rate than MP3, which means Ogg Vorbis files sound better and are smaller than MP3 files. In addition, Ogg Vorbis is an open format with open source software behind it. This means that if you wanted to create an Ogg Vorbis player, or write a program that uses Ogg Vorbis, you don't need to pay any licensing fees on the technology. The MP3 format is patented by a German research organization (the Fraunhofer Society), which charges licensing fees to use it.

So, why isn't it more popular you may ask? I think it all comes down to timing. Ogg Vorbis was introduced much later than MP3, and several MP3 codecs have been released as free software for individual use. I'm hoping that Ogg Vorbis will gain momentum and eventually win out.

If you are interested in free and open audio compression, you might also want to check out the Free Lossless Audio Codec (FLAC). There is no degradation in sound quality, and it is free and open like Ogg Vorbis.

Sunday, September 03, 2006

Reviving my laptop

I have an old laptop which I would hate to see go to waste: an Intel 796 megahertz processor with 128 megabytes of RAM, and the weight of Windows XP has become too much for it to bear. I want a system that will run quickly and smoothly. I need a web browser and programming tools (gcc, make, python, perl, svn, etc.), and an mp3 player might be nice too. I had been running OpenSUSE, but the performance was still a bit sluggish. Then I tried Damn Small Linux (DSL) and it had almost everything I need. Fluxbox is a great window manager and it ran extremely well. Things started to break down when I tried to install make: a series of dependencies and library downgrades prevented me from being able to get everything I needed. The problems continued the more I tried to modify the system. So I've tried others, five distributions so far, but none seem to work just right. This is turning into quite the weekend project.