I just published the first pre-alpha explanatory text for my project to create a new programming language, Flow. You can read the draft description of the Flow Programming Language in The Flow Manifesto. (I haven’t had time to add pretty pictures there yet, unfortunately, so it’s a bit pithy. Hopefully you’re adept at visualizing directed acyclic graphs in your head.)
Flow aims to solve the multicore dilemma through implicit parallelization while guaranteeing freedom from deadlocks and race conditions. Flow allows a safe form of imperative programming, unlike most implicitly parallelizing languages, which are typically purely functional.
Actually, Flow is less a single language than a new and subtly different programming paradigm upon which many languages could be built, though I’m working on a reference language that implements Flow semantics.
Cyanogen has ported my multi-touch code to the Android 2.0 / Eclair API, enabling much simpler implementation of multi-touch apps on top of the new Eclair multi-touch API. If you’re implementing MT apps, trust me, you don’t want to do it without this Java class.

The Eclair build includes both the official Linux kernel API for multi-touch and changes to the MotionEvent class to support multiple touch points. You can use these directly to implement multi-touch apps on Android without any additional code, but dealing with the huge amount of noise in the event stream on touch up / touch down events is a hard problem. And re-implementing the boilerplate code for pinch-zoom over and over again is pointless, because it’s not as straightforward as you might think (you have to transform back and forth between two different coordinate spaces, etc.).

My code helps with those problems: to easily implement multi-touch apps on Eclair, all you need is the patched version of my MultiTouchController.java class (see update below). (This is the class used in cyanogenmod to implement multi-touch scaling in the browser and the gallery.) The class dramatically simplifies the logic necessary to implement dual-touch scaling (pinch zoom) as well as other dual-touch operations involving the distance between the touch points and their orientation.

There is also a *lot* of finicky behavior on current capacitive touchscreens on touch up / touch down events (e.g. the reported position on one axis, but not the other, will suddenly jump to zero while the other axis still reflects the correct location). This code does a dramatic job of cleaning up the event stream so you get stable and useful dual-touch information, has lots of useful helper methods and classes, and includes a high-speed integer sqrt for calculating the distance between the touch points.

Anyway, thanks for porting the code, cyanogen [if anyone's paying attention, it means cyanogenmod is basically now Eclair].
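To illustrate why the coordinate-space transformation is the tricky part of pinch-zoom, here is a minimal sketch of the underlying math. This is not the actual MultiTouchController code; the class and method names are hypothetical. The idea is to convert the pinch midpoint from screen coordinates into object coordinates before changing the scale, then recompute the view origin so the point under the user’s fingers stays fixed on screen:

```java
// Minimal pinch-zoom math sketch (hypothetical names, not the real
// MultiTouchController API): zoom around the pinch midpoint so that
// the object point under the fingers stays fixed on screen.
public class PinchZoomSketch {
    // Object-space coordinate of the view origin, and the current scale.
    static double objX = 0.0, objY = 0.0, scale = 1.0;

    // Invert the screen transform: screen = (obj - origin) * scale.
    static double screenToObjX(double sx) { return sx / scale + objX; }
    static double screenToObjY(double sy) { return sy / scale + objY; }

    // Apply a zoom by 'factor' centered on the screen point (cx, cy).
    static void zoomAround(double cx, double cy, double factor) {
        // Find which object-space point is under the pinch center...
        double ox = screenToObjX(cx), oy = screenToObjY(cy);
        scale *= factor;
        // ...then shift the origin so that point maps back to (cx, cy).
        objX = ox - cx / scale;
        objY = oy - cy / scale;
    }

    public static void main(String[] args) {
        zoomAround(100, 100, 2.0);
        // The object point under the pinch center must not move on screen:
        System.out.println(screenToObjX(100) + " " + screenToObjY(100));
    }
}
```

Skipping the screen-to-object step (i.e. scaling around the view origin instead of the pinch center) is exactly the bug that makes content slide out from under the fingers while zooming.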
I hope this Java class is useful to somebody — please drop me a line if you use it to implement multi-touch in your own projects. UPDATED [2010-06-09]: Updated code to Android-2.2, added support for 3+ touch points, and moved to hosting on Google Code, see the link below.
MT Controller: UPDATE [2010-06-09]: get it from the new Google Code project here.
MT Demos (should work out-of-the-box on an unpatched Droid or Nexus One): MTVisualizer.apk | MTMapsDemo.apk | MTPhotoSortrDemo.apk. Note that MTVisualizer is also posted as a free app in the Android Market.
MT Demos, source code: MTDemos.zip
UPDATED [2010-06-09]: List of applications that make use of the MT Controller — see these for more examples of how to incorporate the MT Controller class into your own code:
- Mickael Despesse’s “Face Frenzy” face deformation app (not yet on the Market)
- Yuan Chin’s fork of ADW Launcher to support multitouch — Yuan says, “I just made use of the backported version by mmin of your MultiTouchController in my fork of ADW Launcher (to implement the pinch zoom), and I have to say it really simplified things (on top of the backwards compatibility)! Thank you! By the way, I will let @anderwebs know about it soon so it will be included in the ADW Launcher in the Market too!”
- David Byrne’s fractal viewing app Fractoid, available free in the Android Market and with source available under the GPL. David is the first person who has emailed me to say he’s using my code in an actual app shipping in the Market, thanks David! Fractoid is really nicely done, the code is clean, and the pinch-zooming works really well. It’s probably the best example of pinch-zoom out there right now because it zooms to exactly the pinched size, unlike the browser that can only zoom in-out by certain increments and constantly re-flows when you’re zooming. Try out Fractoid and let David know what you think!
- mmin’s handyCalc calculator, for pinch-zooming in/out of graphs.
- Formerly: The browser in the cyanogenmod replacement firmware (and before that, JesusFreke) — also possibly other firmwares like dwang5. Now replaced with official pinch/zoom in the second Nexus One OTA for Android 2.1.
UPDATE [2010-02-02]: Google has released an OTA update for the Nexus One that includes pinch-zoom in the three apps where it makes sense (Browser, Maps, Gallery3D). Unfortunately, I looked at the multitouch code used in these apps with baksmali, and it appears that all three MT controller implementations are different. They also don’t zoom around the correct point (the center of the pinch). My multitouch controller performs the correct transformation between screen coordinates and object coordinates to get the center of the zoom right, and makes writing apps like this much easier; they could have saved themselves some time and work.
– Please donate if you use and like multi-touch on cyanogenmod (or previously on the JesusFreke ROMs) — it will encourage me to keep working on cool stuff!
I just discovered a really simple way to create a workqueue on Linux that someone else might find useful. If you have a bunch of jobs to run that all require different parameters and that potentially take different amounts of time to complete, it’s difficult to schedule them in a way that makes maximum use of the available cores, short of using some sort of batch scheduling system, which is overly complicated for a lot of prototyping purposes. It turns out that the xargs command has built-in workqueue scheduling that is really easy to use.
Basic syntax (assuming you want to run the program ‘command’ with one parameter at a time, and that you want to have four processes running at any one time):
echo param1 param2 param3 param4 param5 param6 | xargs -n1 -P4 command
This runs the equivalent of:

command param1 &  # ‘&’ means run in the background
command param2 &
command param3 &
command param4 &

Then, when the first of those four commands completes, xargs will run

command param5 &

then command param6 &, and so on.
If your command requires two parameters, do:
echo param1a param1b param2a param2b [etc.] | xargs -n2 -P4 command
If you have a quad-core processor with hyperthreading, you could do -P8, etc.
You can also obviously store the params in a file and do cat file | xargs … .
The nice thing about this approach over batch scheduling for prototyping is that if you hit Ctrl-C, it kills all the child processes.
I haven’t yet experimented to find the optimal way to generate a separate logfile for each child process, but I also just discovered PPSS, a more powerful system for achieving the same thing as xargs that supports separate logfiles: http://code.google.com/p/ppss/
I hope this is as useful to someone else as it is going to be to me!!
UPDATE 2010-06-07: Ole Tange left me a message in response to this post alerting me to the existence of the project he maintains, GNU Parallel. Looks like an awesome tool.
The Web is buzzing today with the news that Google and Verizon are entering a partnership to release Android-powered cellphones on the Verizon network, with the first two phones being released this year. I suspect there is a lot more to this deal than just another carrier signing on to carry Android handsets.
Anyone remember the airwave auctions? A recap: with analog TV expiring in the US due to federal mandate, the 700MHz frequency band was about to be vacated. This frequency band is particularly valuable because it can travel through thick concrete walls with comparative ease, bounce off interfaces between atmospheric layers to travel over mountains, and travel long distances without significant atmospheric attenuation. Google threw down the gauntlet by offering to place an initial bid of $4.64B for a block of the spectrum if the FCC would enforce four specific openness provisions upon that block of spectrum: open applications, open devices, open services and open networks. Verizon bid more than $9B and won most of the coveted “C block” and parts of the A and B blocks, and AT&T mopped up most of the rest of the available spectrum. Google seemed to have “lost” the auction by being vastly outbid by Verizon, but really emerged as one of the biggest winners, because they never intended to buy the spectrum, just to have the openness provisions enforced. Google announced their intentions and the openness provisions on 20 Jul 2007, they unveiled the Android platform on 5 Nov 2007, and the auction closed on 28 Jan 2008. Verizon (after initially fighting Google over the openness provisions, at least publicly) started displaying a commitment to openness right around the time they won the license block, and that openness has continued with the current press release:
On a conference call with analysts and journalists Tuesday, Lowell McAdam, Verizon Wireless’s chief executive, said the first two Android-powered phones would be available this year. He also said that they will include Google Voice, a calling application that generated controversy when it was rejected for Apple’s device.
“You either have an open device or not, and this will be open,” Mr. McAdam said.
Eric Schmidt, Google’s CEO, hailed Verizon’s data network and scale and said that the carrier’s openness “was, frankly, enormously surprising, given the history and the old-line nature of telcos.”
He added: “In Verizon, somehow, the leadership has decided to embrace a very different philosophy, which works very, very well with the Internet.”
Fast forward nine months to 25 Sept 2008, and another intriguing piece of the puzzle emerges: a patent, originally filed by Google in March 2007, is published publicly, and it claims:
A method of initiating a telecommunication session for a communication device includes submitting to one or more telecommunication carriers a proposal for a telecommunication session, receiving from at least one of the one or more telecommunication carriers a bid to carry the telecommunication session, and automatically selecting one of the telecommunication carriers from the carriers submitting a bid, and initiating the telecommunication session through the selected telecommunication carrier.
This is simply one of the most brilliant, industry-disrupting ideas to emerge in a long time, and it perfectly dovetails with Google’s planned openness provisions. The patent applies what Google has learned through AdWords to the economics of mobile communications. Most importantly, this patent indicates Google’s desire to innovate in the wireless space, starting with upending current business models.
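To make the claimed mechanism concrete, here is a toy sketch of the per-session carrier auction the patent describes. The data and all names are hypothetical (this is my own illustration, not code from Google): carriers submit bids to carry one call, and the device automatically selects the cheapest one.

```java
import java.util.*;

// Toy sketch of the per-session carrier auction in the patent claim above:
// solicit bids for a single call, pick the lowest-priced carrier, and
// route the session through it. All names and prices are made up.
public class CarrierAuctionSketch {
    public static void main(String[] args) {
        // Bids (price per minute) received from carriers for this session.
        Map<String, Double> bids = Map.of(
                "CarrierA", 0.10,
                "CarrierB", 0.07,
                "CarrierC", 0.12);

        // Automatically select the carrier submitting the lowest bid.
        String winner = bids.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();

        System.out.println("Routing call via " + winner);
    }
}
```

The AdWords analogy is exactly this: instead of advertisers bidding per click, carriers bid per call, and the selection happens automatically at session-initiation time.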
One more important point to take into account: rumors have abounded for a long time that Google is building, or planning to build, a nationwide wireless network of some kind. These rumors are pretty unsubstantiated, and don’t fit with Google’s usual pattern of behavior (giving away potential income streams for free just to promote greater use of the Web, and therefore greater income from ads: giving away Gmail, PicasaWeb, the Android platform etc., not taking a cut on Android Market purchases, and not even forcing Gmail-over-IMAP users to view ads). The rumors are also overly short-sighted for Google’s usual pattern of world domination: Google is not likely to build just a nationwide wifi network; that would be far too short-sighted in several different ways…
Integral to this agreement is a commitment by the companies to devote substantial resources to accelerate delivery of leading-edge innovation that will put unique applications in the hands of consumers quickly. The two industry leaders will create, market and distribute products and services, with Verizon Wireless also contributing the breadth of its nationwide distribution channels. Consumers will be able to purchase products resulting from the collaboration in Verizon Wireless retail and online stores.
So, in brief, we have:
- Google raises the stakes on an auction of the perfect frequency-band for next-gen wireless communications, and gets the FCC to enforce openness provisions.
- Google unveils the Android open handset platform.
- Verizon wins the auction and expresses its commitment to openness.
- Google applies for a patent on open wireless communications (specifically for playing off wireless providers against each other), indicating a move to start innovating in the wireless space.
- Google has “probably” been interested in building a wireless network of their own, of some kind, for a long time.
- Google and Verizon announce that they are entering into a partnership together to develop next-generation mobile devices, products and services.
All that’s left is to connect the dots: does anyone other than me think this all points to much more going on than just the release of Android phones on Verizon’s network?
Hint: when somebody thinks that Google is doing something, they’re almost inevitably right about the fact that Google is doing it, but they’re completely wrong about the scope of it — Google always thinks much bigger than the rumor-mongers can imagine. (Case in point: compare all the gPhone rumors that were out there for years with what Google actually had in mind with Android…)
Please donate to support continued development of awesome features for Android (like multitouch zoom)!
A fairly decent short piece on Singularity University on CNET.