Hugh Winkler holding forth on computing and the Web

Friday, December 12, 2008

Flying the F-22

My gut reaction to the internet chatter about distributed version control systems, like Git, was: "What advantage could it possibly offer that would justify my developers and me spending so much time relearning how to manage source code?" CVS kinda sucked, but was good enough, and Subversion was better... what more could you want? It's just version control.

But we had a branching requirement: a customer needed a branch we'd maintain for a few years.

From the Subversion manual:
Tracking Merges Manually

Merging changes sounds simple enough, but in practice it can become a headache. The problem is that if you repeatedly merge changes from one branch to another, you might accidentally merge the same change twice. When this happens, sometimes things will work fine. When patching a file, Subversion typically notices if the file already has the change, and does nothing. But if the already-existing change has been modified in any way, you'll get a conflict.

Ideally, your version control system should prevent the double-application of changes to a branch. It should automatically remember which changes a branch has already received, and be able to list them for you. It should use this information to help automate merges as much as possible.

Unfortunately, Subversion is not such a system.... It means that until the day Subversion grows this feature, you'll have to track merge information yourself. The best place to do this is in the commit log-message.


OK! So, relearning version control might not be a bad idea!

Git is all about changesets, and it can merge changes from branch to branch painlessly. A few months ago, we imported a year or so of changes from SVN to a Git repo (our tree structure was way too f***ed up for git-svn), and never looked back.

I'm a Piper Cub pilot in the cockpit of an F-22 Raptor. There are a lot of switches. I kind of understand some of them. I'm afraid to touch most of them. Every now and then we'll try a new trick -- hey, I think I can just push this changeset to you, from the branch I'm working on to yours... SHIT, THAT WORKED! We haven't crashed yet, and we have the rumbling sense of untapped power.
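In Git terms, that trick is nothing exotic. A sketch, with made-up branch and remote names:

git checkout customer-maint           # the long-lived customer branch
git merge my-feature                  # Git already knows which changesets it has
git push bobs-repo customer-maint     # hand the merged work straight to a colleague

Subversion would have had us writing merge revisions into log messages by hand; Git just tracks them.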

For reasons unrelated to version control, it turned out we could manage the customer requirements without a long running branch. But we're not going back on Git.

Friday, November 21, 2008

REST Blessed

Gartner has blessed REST architecture by giving it a three letter acronym and a $495 report. But you can get the gist of the paper for free in Nick Gall's blog post: WOA: Putting the Web Back in Web Services.

Commercially, it's a big help that I can now point a customer's solution architect to a Gartner report, rather than to a guy's thesis and 20,000 posts to the REST-discuss mailing list. I hope that doesn't sound cynical -- it's not. It's just that architecture arguments are hard to win on the merits alone. You need a little mindshare too. Thanks, Nick!

Thursday, October 23, 2008

Electronic Vote Fraud

I think I know how we can use electronic voting to increase our confidence in vote counts -- as opposed to what we have.

What we have is a mad world where you can construct the most extreme hypotheses about what is happening to your vote, and they could be true. You can't prove they're not.

When I complete my vote, I want to receive a printed receipt with a) my vote in clear text, b) a hash of my identity, and c) my digital signature of the above, for non-repudiation.

I should be able to log onto the county's web site, and verify that my vote was recorded for my candidate:

Voter             Candidate
38907389343666    McCain
84567867435432    Obama
45878963063630    McCain
92573532235672    Obama
...               ...


Anybody can look at this table and count the votes for McCain and Obama, and confirm that they match the announced totals. They can confirm, too, that their own vote was counted for the proper candidate.

My voting receipt matches one row of the table. The voting machine prints it out, and I review it. I should be able to confirm the hash of my identity by some public algorithm. It could be a hash of my voter registration number plus some secret password I make up.

My receipt also has a digital signature that I apply only after I review its accuracy. Yep, I voted for that guy. I insert the paper into the machine and it prints a string that's the digital signature of my identity and my vote. That way, I can't claim the machine got my vote wrong -- I reviewed the clear text vote, and signed it.
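Here's a sketch in Ruby of how that receipt could be computed. The particular algorithms (SHA-256, RSA) are placeholder choices of mine, not a spec, and key management is hand-waved:

require 'digest'
require 'openssl'

reg_number = '12345678'        # my voter registration number (made up)
password   = 'my secret word'  # a secret only I know
voter_hash = Digest::SHA256.hexdigest(reg_number + password)

vote = 'Obama'

# After I review the printed clear-text vote, I sign the identity hash plus the vote:
key       = OpenSSL::PKey::RSA.new(2048)
signature = key.sign(OpenSSL::Digest::SHA256.new, voter_hash + vote)

# Anyone holding the public key can verify the signed row -- and I can't repudiate it:
puts key.public_key.verify(OpenSSL::Digest::SHA256.new, signature, voter_hash + vote)

At home, I can recompute voter_hash, find my row in the county's table, and check that the candidate next to it is the one I signed for.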

Voting this way would be lots more verifiable than paper ballots and hanging chads. You'd be able to verify your vote was counted.

Since early voting began Tuesday in Travis County, Texas, the email chain below has grown virally (local news story). And how can you prove these fears are wrong?

From: redacted
Date: October 22, 2008 11:48:50 PM CDT
To: redacted
Subject: DOUBLE CHECK YOUR BALLOT before pushing VOTE

no matter who you are voting for:
A friend of a friend of mine has this horror story about voting this morning
in Austin, TX.

He voted at Fiesta Mart on 38th and IH-35.

He voted a straight Democratic ticket.

When he was reading the 'voted for' listing at the end of his
ballot, all of those listed were Democratic candidates EXCEPT FOR
PRESIDENT. The list showed that he had voted for John McCain!!!

And he voted the straight Democratic ticket.

He reported it to the election official and that person was as
shocked by it as my friend. They corrected the vote BEFORE he hit
the CAST BALLOT button.

My friend said the experience made him sick to his stomach. He said
he was the youngest person in the voting area and all he could
think is that the older people around him may not proofread their
ballot before pressing the CAST VOTE button. They may believe that
they voted for Obama but the voting machine may have registered a
vote for McCain.

He called the Travis County voting office and they said they would
look into this. When he called me I gave him the telephone number
for the Democratic Party and he then called them to report what had
happened and they said they would look into it, also.

Please PROOFREAD your ballot choices BEFORE hitting the CAST BALLOT
button. This is vitally important. (For EVERYONE who votes,
whichever way you vote.)

Friday, October 10, 2008

Leaps

Dear Lazyweb,

1. In 1929, how many Wall Streeters actually did leap to their deaths from their office windows? I'd be surprised if it were as many as 10. Growing up during the sixties, I had the impression it was raining stockbrokers on Black Tuesday; but now I find it hard to believe.

2. They had names. Who were they?

3. Why did they kill themselves? In 1929, was there a greater sense of personal responsibility for the margin calls they wouldn't be able to make? A greater fear of the shame that would attach to them personally?

4. Has any Wall Streeter killed himself or herself during the current financial crisis?

5. Have any of the current super wealthy attempted to guarantee the system using their personal wealth, as Rockefeller did in 1907?

I pray the answer to 4 is no.... yet I wonder about what level of personal responsibility we should expect, and why it's different than 1929 and earlier.

Sunday, September 14, 2008

Dell Precision M2400 + Ubuntu Hardy

Welcome, Googlers. You are here because you have put Ubuntu Hardy (or perhaps Intrepid) on your shiny new M2400 (or other Precision laptop). And you've run into issues with your wireless; to wit, it don't work.

I've got the nVidia Quadro 370M and Intel Wireless 5100 cards. A fresh install of Hardy on a Dell Precision M2400 supports the nVidia (howto). But support for the 5100 wireless card is non-existent.

To get both of these devices working, you're going to have to build a new kernel, and also compile a beta nVidia driver interface. It's actually an automated and fairly bulletproof process, so don't lose your nerve.

I think there are some other approaches that could work. I tried the compat-wireless strategy -- a technique using the recompiled Intel 4965 driver plus new firmware (oh yeah... you're going to need that 5000 series firmware from the Intel site anyway, so go get it). That strategy actually worked... as long as I ran "make load" on every boot... and occasionally my keyboard would freeze, requiring a reboot. I never could get that approach to feel stable, and my googling left me with the impression that nobody had taken it much further than I had.

But to focus on the strategy I got working:

The problem is that the new Intel iwlagn drivers are only part of the very latest kernel. I do not believe they are in even the Ubuntu Intrepid kernel, so I did not attempt to upgrade to that alpha version of the distro. You can google to find lots of people dist-upgrading to Intrepid only to find they still don't have support for the 5100.

But there is support in the 2.6.27-rc6 kernel, the latest stable development pre-patch.

Build the 2.6.27-rc6-ultimate kernel

If you have X + nVidia set up the way you like, now would be a good time to save a copy of /etc/X11/xorg.conf.

You want to build this kernel the Ubuntu way. I suggest you follow the command line instructions on that page, and avoid the KernelCheck GUI program. I tried KernelCheck first, and it's a really handy way to build a kernel; but it seems not to leave behind artifacts you'll need later, when you build the nVidia driver interface. I admit there could have been pilot error in my case -- but I know the command line method works correctly.

When you get to the "make xconfig" step, choose Find from the Edit menu and search for "iwl". Turn on all those options.

Don't forget to select 64 bit options.

I emulated KernelCheck's naming pattern and identified my kernel with the name "ultimate", so my make-kpkg command was
make-kpkg --initrd --append-to-version=-ultimate kernel_image kernel_headers modules_image
So my uname -r is "2.6.27-rc6-ultimate".

Follow the instructions all the way and install your kernel. Don't worry, your old kernel is still there and in case of trouble, you can always choose it in the boot menu.
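For reference, the whole flow on my machine looked roughly like this -- a sketch, not a transcript, so your source directory and version strings may differ:

cd /usr/src/linux-2.6.27-rc6
cp /boot/config-2.6.24-19-generic .config    # start from the Hardy config
make xconfig                                 # turn on the iwl* options
make-kpkg --initrd --append-to-version=-ultimate kernel_image kernel_headers modules_image
sudo dpkg -i ../linux-image-2.6.27-rc6-ultimate*.deb
sudo dpkg -i ../linux-headers-2.6.27-rc6-ultimate*.deb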

Remember that firmware you downloaded above? Put it at /lib/firmware/2.6.27-rc6-ultimate/iwlwifi-5000-1.ucode. Then reboot, and you'll have wireless... but no nVidia.

Build the nVidia driver

When you reboot into your new kernel, the system will drop you into a default 800 x 600 X session. Don't attempt to configure X yet if it prompts you.

There's no pre-built nVidia driver for this kernel. I also found that only the latest beta nVidia driver interface compiled against this kernel.

In order to build and install the driver, you need to kill X and make it stay killed. Ctrl-Alt-F1 to get a console, then "killall gdm". Then, following the instructions on the nVidia site, run the install script you downloaded. It should even edit the xorg.conf file to install itself, but in my case I still had to go back and set the resolutions -- I just replaced xorg.conf with the one I had saved at the beginning.

Our only departure from the Ubuntu way was using the nVidia install script -- it puts the nVidia kernel module in a place of its choosing, where apt won't manage it for us. But we're not going to be getting new kernels from apt anyway -- we built our own, and we're now going to take responsibility for upgrading kernels (as needed) and nVidia at the same time.

Tidying up

You want to be able to suspend and hibernate, and it just takes a couple of edits. To hibernate, you'll need as much swap space as you have RAM. With this computer, unlike my last one, I have that much swap, and hibernate now works after the edits.

The NetworkManager annoyingly asks you to enter your keyring password whenever you log in. Googling around, I found some solutions. One of them worked for me. I'm not sure I used the best solution, so I'm punting and letting you find your own answers.

Hey -- this backlit keyboard is pretty nice!

Update Sept 17: Using the Hardy config file as your initial config (config-2.6.24-19-generic), simply enabling iwl* as above leaves you missing two modules that Ubuntu installs only as part of the linux-ubuntu-modules package: your sound card and webcam require them. In make xconfig, be sure also to select "USB Video class" (if you have the webcam) and "Intel HD Audio".


Thursday, July 24, 2008

Check your DNS

Dan Kaminsky has published a widget you can use to test your DNS server for the cache poisoning design flaw. You know, the one that allows a malefactor to send your PayPal requests to their own fake servers? The one that doesn't require any vulnerability on your computer, just the standard, unpatched DNS server that you use?

Both my home and business ISPs failed the test. I followed Dan's advice and pointed our routers to OpenDNS. I guess it's a good day to be an investor in OpenDNS. Except I can't figure how they're going to execute on their plan to serve me ads via DNS. Anyway, this is their moment of glory, and they seem to be holding up under the load. If Dan has any stock in OpenDNS, that would be clever of him. I am not suggesting he does. Unless it turns out that he does. Then in that case, I am suggesting it retroactively, and don't say I didn't tell you so.

Tuesday, July 01, 2008

Get with it, Pops

Dare gives evidence that older Googlers are coming away dissatisfied from the experience.

Here are a few disapproving quotes:
  • It was obvious that they do not care that I had 12 years software engineering experience
  • orientation towards cool, but not necessarily useful or essential software
  • don’t have a career path for their employees
  • There is no legacy code
  • As all organizations mature they tend to add PROCESS
Isn't this the static vs dynamic argument? Old models of software development vs new? On-premises software vs software as a service? Will legacy code even be important at Google? Shouldn't our goal be disposable code?

Aren't perspectives like this the reason Microsoft is perceived as a falling star while Google is a rising one?

Saturday, June 28, 2008

Asynchronous HTTP POST

I've got to process a huge POST asynchronously. If it were small enough I'd just return 201 Created, with the URL of the new resource in the Location header. But this is a massive file upload that requires a bunch of processing, and checking, before I can create the resource. It takes so long to process, your TCP connection will time out.

202, right?

This is a job for 202 Accepted, right? I had always understood 202 was how to respond asynchronously to a POST. I return some hypertext with a link you can follow to see the status. You follow that link, load the status page, and hit refresh until it shows "100% done" and displays yet another link to the resource you created. That's what the RFC says to do.

In a machine-to-machine case, e.g. Atompub, I have to define meaningful content types so that clients can follow the hyperlinks to learn how the POST came out. If you're thinking of using the Location header: it isn't blessed for 202; and even if it were, we couldn't guarantee that the resource at the returned URL will ever exist.

Why not 303?

I prefer to respond with 303 See Other. Clients follow that Location header to a status page, eliminating the extra manual step required by 202. Retrieving that page would itself return 202 Accepted, until I've created the resource, or failed to. Then that URL would return 201 Created, or whatever the result would have been in the synchronous case.

This way, user agents just follow the semantics of HTTP, and never need to understand any application entities.
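Here's a rough sketch of the whole dance in Ruby, using Sinatra. The resource names are invented, and a real version would persist job state somewhere less naive than an in-memory hash:

require 'sinatra'
require 'securerandom'

JOBS = {}  # job id => :working, :done, or :failed

post '/uploads' do
  id   = SecureRandom.uuid
  data = request.body.read
  JOBS[id] = :working
  Thread.new do
    # ... the long-running processing and checking goes here ...
    JOBS[id] = :done
  end
  redirect "/status/#{id}", 303        # See Other: the status page
end

get '/status/:id' do
  case JOBS[params[:id]]
  when :working
    status 202                         # still working on it
    'Processing... refresh to check again.'
  when :done
    status 201                         # the answer the POST would have given
    headers 'Location' => "/uploads/#{params[:id]}"
    'Created.'
  else
    status 404
  end
end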

Summarizing:
  • Any user agent knows to follow the 303 to the status page automatically. This is the URI giving "the response to the request", which is a status page. Any time you want the response to this POST request, go to that URI.

  • GET on that status URI, for some time, returns 202 and an entity giving "an indication of the request's current status".

  • Finally, at some time when you check that URI, you get the final "response to the request".
The surprising part might be encountering 202 or 201 in response to GET. But nothing says you can't, and in fact that is the "response to the request".

(Ben Ramsey discusses returning 202 from the POST. It gets a little messy.)

Saturday, June 14, 2008

Ruby: DSL for Writing Programs

Ruby:


PEOPLE = [{:first=>'scott', :last=>'guthrie', :age=>32},
          {:first=>'susanne', :last=>'guthrie', :age=>32},
          {:first=>'bill', :last=>'gates', :age=>50}]

def avg_age
  guthries = PEOPLE.select {|p| p[:last] == 'guthrie'}
  guthries.inject(0) {|s, p| s + p[:age]} / guthries.length
end



Common Lisp:


(defvar +people+
  (list
   (list :first "scott" :last "guthrie" :age 32)
   (list :first "susanne" :last "guthrie" :age 32)
   (list :first "bill" :last "gates" :age 50)))

(defun avg-age ()
  (let ((guthries (remove-if (lambda (p) (not (equal "guthrie" (getf p :last))))
                             +people+)))
    (/ (reduce #'+ (map 'list (lambda (p) (getf p :age)) guthries))
       (length guthries))))



These snippets calculate the average age of the people having last name "guthrie". The functions are both two lines long, but the Ruby one is more readable.

S-expressions give Lisp powerful ways to build code in code. But writing programs using S-expressions seems cumbersome compared to Ruby's syntax. The syntax of Ruby shortens the code you have to write to do everyday tasks. It's like a domain specific language for, er, writing computer programs.

You give up some programming power -- a fair trade if you rarely need that power.

Thursday, May 29, 2008

Self-appointed Guardians of Truth

SiteTruth has given my company's web site a big red do not enter sign.

Even though we don't sell products electronically, they wish we had a certificate. And we've never put our office address on that site -- an oversight when we moved, not a scam. So they give us a big red "Do Not Enter" sign, indicating our site is dangerous to enter. From their "about" page:

Every on-line commerce web site must display the name and address of the business behind the site. That's the law in much of the developed world. SiteTruth tries to identify that business, then find information about it. That check is used to influence search rankings. That's SiteTruth. (emphasis mine)


We're not an "on-line commerce web site", but their system can't detect that, so they're selling technology that will lower our search rankings?

We'll fix our site to please them, of course; why not? But their technology doesn't seem to increase the safety of the web, and it's likely to piss off other legit site owners, some of whom may even feel litigious. Could you blame them?

Friday, April 25, 2008

Hardly any near death experiences

The Hardy upgrade went... ok. Better than the upgrade to Gutsy, at least -- the initial boot this time got me to an X session, if only in 800 x 600 mode.

So you don't have to read to the end, here's my helpful hint to you this upgrade cycle. If you're having trouble with your nVidia: after you upgrade, use Synaptic to uninstall your old restricted drivers packages (in my case, for kernel 2.6.22), and select the new ones (for kernel 2.6.24) which, in my case at least, were not selected after the upgrade.

Only after you do that, and reboot, can you see the new nVidia driver in the System/Administration/Hardware Drivers applet. This used to be called the Restricted Drivers Manager and all the online docs still refer to that. But Hardware Drivers is what you want. Go there and do what comes naturally.

Last night I was finally able to get Update Manager to connect and start downloading files. I let that run overnight -- it took 6 hours or so to download everything, presumably because of the tremendous load on the U.S. Ubuntu server.

This morning, I awoke to find it paused in a dialog. Just a warning that it was about to clobber my modified mime.types. I answered OK. It proceeded a little, displaying the progress in a little terminal window. I noticed some interesting progress output, and started editing some notes. At one point, I selected some text in the terminal window and copied it. Using Ctrl-C. Who would have thought the goddamn terminal window was accepting keyboard input and would process the interrupt? So immediately I got three alerts warning that it could not complete the emacs/ede/eieio installations because it had received an interrupt. Yesterday I predicted the emacs upgrade would fail, but I didn't mean to fulfill my own prophecy.

It seemed to continue from there almost without trouble -- but at the end it complained it could not upgrade the update-manager. Ironic, isn't it? And at the end of the install it displayed a scare alert: "Your system may be unusable". Nice. Bravely, I rebooted, and found myself in 800 x 600, but at least with an otherwise stable system.

After some flailing I discovered the secret sauce to getting nVidia working again. And all systems are now go!

I wonder how Mark Pilgrim's mom and dad are doing?

[updated to fix link]

Thursday, April 24, 2008

Ubuntu Hardy is out and I'm a sucker

I managed to hold off about 6 hours before giving in and pressing the "Upgrade" button to get Hardy Heron. Yes, I'm in the middle of a project. Yes, I expect I'll have nVidia issues. My emacs will probably fail upgrading as usual. But: it's too tempting. I'm a sucker. I know it! No discipline whatsoever.

Hm... I guess the servers are mighty busy today. My first attempt timed out: "Could not download the release notes. Please check your internet connection".

Saturday, April 12, 2008

Lisp a casualty of its time

This sad editorial remark from the CLIPS architecture document, describing perceptions about Lisp within NASA around 1984:


...Despite extensive demonstrations of the potential of expert systems, few of these applications were put into regular use. This failure to provide expert systems technology within NASA’s operational computing constraints could largely be traced to the use of LISP as the base language for nearly all expert system software tools at that time. In particular, three problems hindered the use of LISP based expert system tools within NASA: the low availability of LISP on a wide variety of conventional computers, the high cost of state-of-the-art LISP tools and hardware, and the poor integration of LISP with other languages (making embedded applications difficult).


Would they encounter the same barriers today, twenty-four years later?

1. Low availability of LISP on a wide variety of conventional computers: Now we have commercial implementations and open source Lisps, working on most computers and operating systems.

2. High cost of state-of-the-art LISP tools and hardware: The hardware issue has faded away since Lisp machines gave way to general purpose computers. And Lisp is no more memory intensive than Java or .NET. Of the commercial Lisps, LispWorks is affordable for normal mortals, and certainly the Allegro products are within reach of NASA. And there are several good, free, open source CLs.

3. Poor integration of LISP with other languages: An issue only if your mindset is 1984. Nowadays, to integrate with a Lisp program, you'd treat it like any other network resource -- like an RDBMS, or a web service.

I wonder: what is the state of the Lisp renaissance within NASA?


[updated to expand remark on item 2]

Monday, March 03, 2008

Rule of Least Power: Bah!

The Rule of Least Power, a W3C TAG finding, posits: "Powerful languages inhibit information reuse." They're observing that it's easy to scrape documents written declaratively using HTML. The problem with using more powerful languages like Javascript, they say, is that "you typically cannot determine what a program in a Turing-complete language will do without actually running it."

So? As long as the output is a DOM, just run the program and inspect the DOM.

You already have to use a good HTML parser, right? Now, just run all the script elements on the page too -- obviously, in a restricted environment.

I'm sure Google and friends must do this. They're not going to leave valuable information on the table.

Tuesday, February 05, 2008

Yet another reason to use NoScript

Niall Kennedy tells us all how he grabs your browser history "for improved user experience".

1. Put links on the page to all the interesting sites you want to identify in the user's history. Have they visited Google? Have they visited certain porn sites?
2. Put some CSS for those links that colors them some weird color when they have been a:visited.
3. Add some script that crawls the DOM looking for links styled that color.
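In other words, something like this (the probe URL is just an example, and the computed-color string format varies by browser):

<style> a:visited { color: rgb(12, 34, 56); } </style>
<a id="probe" href="http://www.google.com/">probe</a>
<script>
  var link  = document.getElementById('probe');
  var color = window.getComputedStyle(link, null).color;
  if (color == 'rgb(12, 34, 56)') {
    // the visitor has google.com in their history
  }
</script>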

Voilà.

Yet another reason I'm glad I've turned off most scripting using NoScript.

[Updated to remove a made up porn site domain name that actually is a porn site domain name.]

Monday, February 04, 2008

Eli can't get no respect

[Update Feb 5: they've fixed the headline, but I just noticed the caption at the right still says "Super Bowl XLI". So that explains it. To build yesterday's Super Bowl results page they just grabbed last year's. Then they attempt the replacements s/Peyton/Eli/g and s/XLI/XLII/g. Probably they left off the 'g'!]

One Manning's as good as another to the British:

Sunday, January 20, 2008

Clojure

Clojure is a Lisp that runs on the JVM. I agree with everything on this page describing the design rationale. Objects are overrated; functional and immutable are good. I can't wait to get a breather and figure out whether and where Clojure fits into our technology arsenal.

The real measure will be how well it can leverage the Java platform. There are plenty of great languages, but they have inconsistent support for, say, sockets, or window systems. Can I override a protected method of a Swing base class? Can I connect to an HTTPS server? JRuby has knocked Java integration out of the park, so you get all that capability for free, plus a powerful language.

Update: I think the bullet point "Extend Java in Java, consume Java from Clojure" on the rationale page suggests you can't easily write a Swing app in Clojure. Shoot.

RIA using JRuby and Web Start

I went dark over the last eight weeks, building a rich web application to integrate with our product. It's been years since I built a desktop GUI application, but I was pretty good at it once. It's not always a lot of fun. But some things a man just has to do himself.

The application does tons of vector drawing.... it draws hundreds of thousands, even millions, of line segments. Yet it needed to be responsive... scrolling, zooming, panning. It's data intensive, too. It downloads dozens of megabytes of data for each view. I needed to develop this program fast, so I needed a programming system and platform allowing me to iterate and adapt to new requirements recursively revealed. I preferred a declarative graphics language where possible, and for any procedural code, a powerful dynamic language. Ideally the program should run on all OS platforms; but the business could probably tolerate a Windows only solution, if that offered superior results. I made tradeoffs among these requirements.

I was not constrained by download footprint, a concern which eliminates Java from consideration in lots of other scenarios. Anyway, that problem is going away soon, they tell us.

So here's the spoiler: I chose Java Web Start + JRuby. And now a short dissertation on how I arrived at that choice:

I looked to browser and browser++ technologies like SVG, Flex, AIR, Silverlight, Java applets/Web Start/JavaFX. (Open Laszlo deserves a look too, but I didn't eval it). It would be great for this app to run within the browser -- that would be the most seamless experience. But using an out-of-browser technology like AIR or Web Start isn't a deal breaker if you do it right.

SVG was very promising. It would be a pure browser solution. I really valued the declarative model and the standards alignment. But it does not work in IE. I also did some performance experiments with Opera, Safari, and Firefox. Drawing polylines of a hundred thousand points or so, Opera and Safari performed well, but Firefox performed poorly. It's conceivable, if unlikely, that I could distribute the app as a FF-only solution -- even if I bottle it up as a XULRunner app. But the FF performance was inadequate. And distributing only for Safari or Opera is not realistic.

I made a significant evaluation effort with Flex/AIR. Surprisingly, I learned that the Flex 2-D graphics model is procedural, not declarative. You can, however, bottle up your procedures and use them declaratively from MXML, the Flex XML language. I felt I was going to spend a significant amount of time learning Flex -- time I needed to spend on my app -- and that I needed the assistance of tools like Flex Builder. I also became concerned that I was developing a program away from the design center of Flex. Would I encounter design/performance constraints?

Silverlight performed very well, and it's got an XML declarative model just like SVG, with the power of a bunch of WPF components to boot. I was confident that all the capabilities of the Windows platform would be available to my program. Microsoft distributes Silverlight for Windows and Mac, but not Linux, which they are leaving to Novell. I read that as: no Linux. But Windows and Mac are enough coverage, even if I would be sorry to leave aside my Ubuntu dev machine. My fate on the Mac would be in Redmond's hands, of course. Does Microsoft have my best interests at heart?

Java applets or Web Start, using Java 2D, I also knew to be capable of these rendering tasks. I'm satisfied that I can make Java applications now that look pretty sweet. The down sides were the procedural approach and the static Java language. It's the 1999 state of the art. Layout code so impenetrable you need GUI builders.

JavaFX deserves a separate mention. The capabilities of the entire Java platform are there for you. Yet you program in a declarative/dynamic language optimized for building Java 2D apps. Perfect! Unfortunately, it's pre-alpha. Sun hasn't yet released it under a license allowing you to put your work into a customer's hands.

I wanted the wide reach of Java. I think that the 5% of customers on non-Windows platforms may be more important than their numbers.

I also wanted the enormous, hardened Java platform. It's as capable as the Windows platform. You won't encounter many problems you can't address or work around.

Couldn't I have all that, plus a dynamic language to stitch it all together?

Guess what. JRuby is stable and 1.x. It's dynamic, yet it compiles on the fly to JVM byte codes. It's got great Java integration, meaning you can invoke any Java API. Profligacy is a terse little Ruby lib for stitching up Swing components. JRuby's just a jar I download with my Web Start app; I launch the main Ruby script from my Java entry point. The rest of the program is in Ruby, occasionally calling into Java libraries. No, there's no declarative GUI language here. It's one of the tradeoffs.
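For a taste, here's about the smallest possible JRuby + Swing program -- a sketch, not a piece of the app:

require 'java'

frame = javax.swing.JFrame.new('Hello from JRuby')
frame.content_pane.add(javax.swing.JLabel.new('Swing, stitched up in Ruby'))
frame.default_close_operation = javax.swing.JFrame::EXIT_ON_CLOSE
frame.pack
frame.visible = true

Every Swing class and method is right there, with Ruby's syntax laid over the top.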

My work environment: Fire up NetBeans for its good Ruby editor. In a command window kick off jirb and execute a script to load and run my main code. Edit code and reload it into the running program in jirb. Lather, rinse, repeat.

I would argue that JRuby accelerated my application's performance, because I iterated and applied two major architectural optimizations that would have introduced impenetrable spaghetti in Java. The bottleneck for speed and responsiveness has never been JRuby. It turned out that even Java2D + acceleration could not keep up acceptably with the level of interactivity I wanted. The SVG or any other pure declarative approach would have failed (although I think JavaFX's scene graph might be able to optimize the drawing). I had to go in and surgically rearrange the innards of the program. The changes were effective but had low impact on stability, thanks to the flexibility and power of Ruby.

If Sun can focus more energy on these dynamic languages built on the JVM, that would be a pretty powerful story: The deep platform capabilities, the broad reach, the rich development environment.

Tuesday, January 15, 2008

And Speaking of Paul Graham...

Arc to be open sourced this winter. It's going to be hard to top the expectations.

Monday, January 14, 2008

The Truth about Lisp

Majorly funny. Leon Bambrick wrote (back in 2006):


If you're good enough to use lisp, you'll soon be frustrated with lisp. Lisp is not an adequate lisp. By the time my bus had made it two blocks I'd written some simple lisp macros that were so powerful they made lisp completely obsolete and replaced it with a new language. Fortunately, that new language was also called lisp. And i was able to prove, mathematically, that the new lisp i'd created was both far superior to lisp in every conceivable way, but also exactly equivalent to lisp in every possible way. I was very excited by this. But also found it very boring....Paul Graham himself was completely written in lisp, by an earlier version of himself, also written in lisp, by an earlier version of lisp. It's lisp, paul graham, lisp, paul graham, all the way down.

Thursday, January 10, 2008

Why objects suck

class Curve {
    String name;
    float[] data;
    Color color;
}

class Plot {
    Curve[] curves;
    void addCurve(Curve c);
    void removeCurve(String curveName);
}

"No, no, no... you're mixing in style information. Pull the style out of Curve and separate the concerns." Oh. OK...

class Curve {
    String name;
    float[] data;
}

class CurveStyle {
    String curveName;
    Color color;
}

class Plot {
    Curve[] curves;
    void addCurve(Curve c);
    void removeCurve(String curveName);
    void setCurveStyle(String curveName, CurveStyle style);
}

Well, all right then! I'm sure this is in a patterns book somewhere. I feel warm satisfaction with the pure, objectified minimum entropy of my factorization. Until I implement Plot.removeCurve... oh shit... I have to search for all the dangling CurveStyles referring to the curve, and delete them. As OO programmers we do loads of that every day. You come to think of it as natural. The best OO languages have list comprehensions that make it easier to do. That's nice, but it's not enough. The problem is that "objects" aren't a logical model.

Wouldn't it be more "natural" for the dangling CurveStyles to delete themselves when no longer relevant? There's a logical model for that, the relational model:

Style
CurveName   Color
MSFT        Red
GOOG        Blue

Curve
CurveName   Data
MSFT        1,4,-9
GOOG        13,2,22


Logic tells you that the style row cannot exist when the Curve row has been deleted. A foreign key relationship forces the system to delete the style when you delete the curve.
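In SQL, that's a one-line declaration (hypothetical column types):

CREATE TABLE Curve (
  CurveName VARCHAR(16) PRIMARY KEY,
  Data      TEXT
);

CREATE TABLE Style (
  CurveName VARCHAR(16) REFERENCES Curve(CurveName) ON DELETE CASCADE,
  Color     VARCHAR(16)
);

DELETE FROM Curve WHERE CurveName = 'MSFT';  -- the MSFT Style row goes with it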

I want my programming language to do that for me, and that's part of what I was wishing for two years ago.
