Messages not Models

Hugh Winkler holding forth on computing and the Web

Monday, October 08, 2012

Finder Not Responding: Lion and Mountain Lion

After upgrading to Mountain Lion, Finder frequently became unresponsive. The culprit was the "All My Files" feature in the sidebar: Finder hangs sorting and displaying the 15 gazillion files on my disk.

Here's how to fix it:

  1. Open Finder Preferences.
  2. Under Sidebar, uncheck "All My Files".
  3. Under General, set "New Finder windows show:" to anything but "All My Files".

I liked All My Files. I hope Apple will put some effort into optimizing this view.

Friday, February 24, 2012

MVC Frameworks. Keep 'em.

I said it here in 2006. MVC on the server is a poor fit for web apps. Here's a more up-to-date critique.

Subsurfr, which I introduced here a few days ago, is an example of what that author calls a "serverless" web app. All the app logic is in the js + html. It uses back end resource servers for maps and drilling data, but there's no PHP/Rails server-side logic. That's going to change a tiny bit in the near future to support personalization, but the app state will substantially remain in the hypertext.

Monday, February 13, 2012

Jakob Nielsen: Web will beat mobile apps

Although he thinks that apps offer a better user experience right now, and offers evidence to back that up, Jakob Nielsen officially predicts that web apps will surpass mobile apps.

More ammunition came a short time ago from the Financial Times, which released a rich iPad web app that is every bit as articulate as a native app. The FT's designers discuss that decision here.

The excellent FT iPad web app suggests that native app UX only surpasses web app UX because typical web apps don't put in the effort.

There's very little I find in native mobile apps that I could not do as well in a web app (HTML5 plus concessions to platform-specific CSS and JS calls). In some ways, the UX already is better in a web app. I almost always find that native apps lack some critical functionality that the web app has. And the web app has seamless hyperlinkability, in and out.

Sunday, February 12, 2012

Exploring the Bakken using WebGL

In the spirit of Done is Better than Perfect, we recently unleashed Subsurfr, a 3-D subsurface visualization web app for oil and gas.

We primed it using 17,000 wells from North Dakota. Since the Bakken shale is a very active play, some geologists and engineers are going to be very happy to have this new arrow in their quiver. Here's a fun look at the very busy Bakken from a well planner's pov.

Building this app pushed a lot of boundaries (for us, anyway). First among them was WebGL. And to broaden the reach to IE, we use Chrome Frame. But also, Subsurfr is a mashup, and it exercises Cross-Origin Resource Sharing (CORS) so the browser can consume data served directly from a variety of sources. We also experimented with BrowserID, and with OpenID + OAuth, before chucking identity altogether this iteration -- the business driver isn't in place.

WebGL is a lot more mature and stable than I would have expected. After all, the spec was only finalized in the last few months. But after you get past the unfamiliarity of the GL computing model (GLSL, vertex shaders, fragment shaders) and the trial-and-error nature of debugging (you can't even print to a console from GLSL), I found our code just works on every WebGL platform, including Firefox on my Android phone.

Although three.js seems to be the weapon of choice for most new WebGL projects, I chose the lower level threedlibrary (tdl). I didn't do a lot of comparison shopping before I made that decision. It turned out to be a good choice. Because tdl is close to the metal, I never ran into any WebGL capability that I needed but could not use. Yet, as a WebGL noob, I needed the structure of tdl -- tdl taught me how to use WebGL properly while I was actively developing working code. 'Cause that's how I roll. I'm not going to spend $50 on a WebGL book and research the 5 top libraries and do a three week test project in each one before coding.

The web pages making up Subsurfr itself are totally pregenerated -- presently about 35,000 of them. 'Course, lots of the data still comes down via ajax to the browser from the cross-origin servers. By serving only static pages, Subsurfr can run on a small nginx server. How the ajax data servers are going to hold up -- ask me in a few weeks.

CORS itself is a minor challenge. First, it does not run in IE < 10. That's not a problem for Subsurfr, because Subsurfr already has to use Chrome Frame to run in IE. However, we use MapQuest as our map server, and they've never heard of CORS. Consequently, we proxy our XHR calls to MapQuest via our own server, which adds the CORS headers. Additionally, we use the dojo toolkit to execute those XHR calls, and dojo has its own issues there. Finally, our own drilling data server had to be retrofitted to send the CORS headers. So we have several workarounds and patches to get all the CORS stuff going. Still beats IFRAMEs.
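The proxy's job is simple header injection. Here's a minimal sketch in Python of that step, assuming the upstream response headers arrive as a plain dict -- the function name and header values are illustrative, not Subsurfr's actual code, but the header names are the standard CORS response headers:

```python
def add_cors_headers(upstream_headers, allowed_origin="*"):
    """Return a copy of the proxied response headers with CORS headers added."""
    headers = dict(upstream_headers)  # don't mutate the caller's dict
    headers["Access-Control-Allow-Origin"] = allowed_origin
    headers["Access-Control-Allow-Methods"] = "GET, OPTIONS"
    headers["Access-Control-Allow-Headers"] = "Content-Type"
    return headers

# e.g. a map tile fetched from the upstream server on the browser's behalf:
tile_headers = add_cors_headers({"Content-Type": "image/png"})
```

The proxy forwards the browser's request upstream, then returns the body with these headers attached, so the cross-origin XHR succeeds.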

Anyway -- check out Subsurfr to see some real-world WebGL in action. If you are interested in Subsurfr from an oil and gas perspective, I've written more about it here.


Thursday, July 16, 2009

Inflexible, unmaintainable, fragile code in Clojure

I constructed a subsystem in Clojure. We had a nice, compact problem to solve. The rest of our app talks to this Clojure bit through a small interface. The attraction of Clojure was that I could enjoy all the nimbleness of hacking Lisp interactively. Using such a powerful language, I could adapt to emerging requirements, experiment, twiddle, and workaround faster than using Java. And early on, that largely proved true.

But I am here to tell you: It is possible to write inflexible, unmaintainable, fragile code in Clojure.

As they say in bicycle racing, the riders make the race. I never said I was a good programmer.

It's my own fault. I dived into Clojure and learned key concepts on the run -- key concepts that are fundamental to how you should structure your program. FP has a learning curve. Managing shared state using Refs, Agents, and STM -- better to understand these concepts up front. Even now, I'm unsure how I'd restructure my code to use them properly. I'm very glad not to have to worry about deadlocks. But there are parts of this code I'm still unsure how to test for consistency under multithreaded access.

Consequently, I'm slogging through this bowl of spaghetti of my own making, cursing the author.

[Edited to add "fragile" ... as I made one more change and my tests came tumbling down... ]

Saturday, May 30, 2009

Wolfram Alpha isn't sure what to do with your input

I have yet to get an answer from Wolfram Alpha to any interesting, real question I have asked.

Saturday, April 25, 2009

If RMS has his way, the GPL is dead

The GPL has never made me feel particularly free. Only in a kind of newspeak can you say that constraining the kinds of agreements I can make with people who buy my software enhances my freedom.

I'm not saying I don't like and use GPL software, or that the GPL is unfair. Just don't tell me that handcuffs are extra freedom.

Only copyright law enables GPL to constrain my actions from beyond the grave. In a world without copyright, GPL would be toothless. I would be able to use any freely available software I could get my hands on, incorporate it into my own, and distribute only binaries.

Richard Stallman proposes reducing copyright protection to three years for software. And he admits "It would be necessary to prohibit the use of contracts to apply restrictions on copying that go beyond those of copyright." That prohibition would apply to FOSS too, of course. No copyright after three years; no contract allowing the author to constrain your actions after that.

That would mean that you could link to three year old GPL libraries and ship binaries!

Saturday, April 18, 2009

The REST Hypothesis

I'm glad Joe Gregorio frankly assessed Atompub's meager adoption.

Atompub served as an experiment to confirm the REST hypothesis: If you construct distributed systems that look like the world wide web, the world will adopt them broadly and quickly -- the way we adopted the web itself. *

Lesson learned: the world's most successful distributed applications run on the web's architecture. That doesn't mean that by designing around web architecture, you will build the world's most successful distributed application.

Atompub came along at the time a critical mass of thought was building in favor of adopting the web architecture for designing new systems, rather than the ill-named "web services" model of SOAP and RPC. Early drafts of Atompub allowed for a SOAP envelope.

The RESTful style prevailed, on its many merits. We got caching, and a resource oriented model, and a small, uniform interface.

But what the real, browser plus HTML web has, that RESTful systems don't, is the user agent. The human in front of her browser. An intelligence that reads and understands the meaning of "Author name" and "Title", and fills in an HTML form using queries against her personal database, stored in her brain.

As Joe put it, the problem is that Atompub clients aren't web browsers.

RESTful systems that aren't web browsers try to substitute understanding of media types for that intelligence. Compare Atom clients to HTML browsers.

An "Atom application" is some sort of content management system: a system that understands the semantics of feed documents. An Atom agent populates <author> and <title> elements. Only machines understand the meaning of <link rel="edit">. It's a world constrained to a narrow range of meanings.
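That narrow machine-level "understanding" can be made concrete. In the sketch below, everything an Atom client can extract from an entry is structural -- for example, which URI is the edit link. The entry XML is invented for the illustration:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

entry = ET.fromstring(
    '<entry xmlns="http://www.w3.org/2005/Atom">'
    '<title>An example post</title>'
    '<author><name>Anon</name></author>'
    '<link rel="edit" href="http://example.org/edit/1"/>'
    '</entry>'
)

def edit_uri(entry):
    """Find the href of the rel="edit" link, the way an Atom client would."""
    for link in entry.findall(ATOM + "link"):
        if link.get("rel") == "edit":
            return link.get("href")
    return None
```

The client knows *where* to PUT an updated entry, but it has no idea what the entry means -- that's the whole of its intelligence.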

But an HTML application can do anything. A human reads some text next to a form field, labelled "Author name", or "Preferred airline", and enters meaningful answers. If you are writing the kind of content management system where "Preferred airline" is a meaningful concept, HTML might be the way to go. There's no such concept in an Atom feed document. You get to present your own user interface, too.

RESTfully designed systems might profit from superior evolvability, cacheability, and interoperability over RPC systems. They are of the web. But they are not the web. The web is in your browser. The web is HTML.

*The hypothesis was not part of Roy Fielding's thesis. It's a hypothesis that many REST proponents, including me, have deployed.

Tuesday, January 20, 2009

Cheney Wheelchair Horror

Best comment so far:

If only he had a small cat to pet as they rolled him to the car. You know something that he could stroke until he snapped its neck to suck the blood from its spine.

(by "Brian")

Saturday, January 03, 2009

Clojure vs JavaFX Script: only 5x slower!

Chris Oliver compares his JavaFX Script to JRuby and Groovy and finds JavaFX 25 times faster.

I failed to get the JavaFX compiler running on my Ubuntu, but I easily ran Chris's JRuby 1.1.6 test; so for calibration, my laptop comes in at 3:52, vs 4:22 on Chris's machine. Presumably we can scale results on my machine by 1.13.
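The scaling factor is just the ratio of the two JRuby times:

```python
# The same JRuby test took 3:52 on my laptop and 4:22 on Chris's machine,
# so times on his machine run about 1.13x longer than on mine.
mine = 3 * 60 + 52     # 232 seconds
chris = 4 * 60 + 22    # 262 seconds
factor = chris / mine  # ~1.13
```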

Here's my Clojure version of the test:

(defn tak [#^Integer x #^Integer y #^Integer z]
  (if (>= y x)
    z
    (recur (tak (- x 1) y z)
           (tak (- y 1) z x)
           (tak (- z 1) x y))))

(dotimes [n 1000] (tak 24 16 8))

It runs in about 50 seconds, or 56 after normalizing to Chris's machine speed. That puts Clojure well ahead of JRuby and Groovy, but still 5 times slower than JavaFX.

If you remove the type hints, it runs in 4 minutes, or about the same time as JRuby -- and about 4 or 5 times slower than with type hints.