Where's my open smartphone?

The IBM Simon, released in 1994, has been called the first smartphone. But it was the launch of the iPhone in 2007 that brought touchscreen, pocket-sized computers to the mass market. Today, over three quarters of adults in the UK own a smartphone. We keep our phones on 24/7: tiny computers running full operating systems, with microphones, cameras, GPS and accelerometers. The possibilities if we control them ourselves are amazing, and the possibilities if someone else gets control are terrifying. So we've got good reasons to want a phone running software that we can inspect and change.

This post, based on a talk I did for Portsmouth Linux User Group, looks at the landscape of free and open source software (FOSS) available for phones in late 2017.

The big two

Before we get into the FOSS alternatives, how open are the two platforms that make up the vast majority of the smartphone market today?


Open parts in green, proprietary parts in red.

Both Android and iOS have some open-source parts. But they require proprietary drivers and firmware to run on most phones. The freedom to modify the software and run your modified version is not readily available to end users. Google's Project Treble, which aims to make updating Android easier, might make it more feasible to run modified versions of Android.

Developers can build apps on the open source Android platform, but many apps also rely on proprietary Google APIs.

With an option in the security settings, Android allows 'sideloading' apps from sources other than the Google Play store. This possibility allows F-Droid to exist: an app store solely for FOSS applications. If you've got an Android device, give F-Droid a try: it's an easy, low risk step into the mobile open source world. Its catalogue of ~1400 applications is tiny compared to the millions in Google Play, but you might discover something interesting.

Now, what other options are there besides the big two?

Commercial offerings

First, what other platforms can you (or could you) buy a phone with?


Geeksphone Peak running Firefox OS. Image by Havarhen via Wikimedia Commons.

Firefox OS was announced in 2011, with a goal of running on inexpensive low-end smartphones, for people in developing countries who may not already have a smartphone. It relied heavily on browser technologies—HTML, CSS and JavaScript—for both applications and the system UI. That's no surprise for a platform from Mozilla. A number of devices were released running Firefox OS, but in 2016 Mozilla decided to refocus its efforts elsewhere. A fork called KaiOS lives on, but there's little information available about it.


Nexus 5 running Ubuntu Touch. Image by Vinodh Moodley via Wikimedia Commons.

Ubuntu Touch was also launched in 2011. Spanish manufacturer BQ and China's Meizu made a handful of models running Ubuntu Touch. But it's best known for the Ubuntu Edge, a wildly ambitious crowdfunding campaign for a premium smartphone which could be plugged into a monitor and keyboard to run a full Ubuntu desktop. Despite thousands of eager pledges, it didn't get close to its $32 million goal, so people got their money back and the project was cancelled. Canonical kept promoting Ubuntu Touch until earlier this year, when they handed it over to the UBports community project.

Next up, there's... these things:


Mobile OS family tree, by Semsi Paco Virchow on Wikimedia Commons.

Nokia used to make remarkably open 'internet tablets' based on Maemo, and then one phone using MeeGo, before they went all-in on Windows Phone in 2011. We can only wonder what might have been if they'd stuck with their Linux platform.

Samsung's branch of this family is now called Tizen. Samsung have used it in several smartwatches, and a line of smartphones sold in India, such as the Z2. I'm not sure that Samsung is any better a guardian of a free and open platform than Google, but they might put a small dent in Android's market dominance.


Jolla smartphone running Sailfish OS. Image by Herman on Wikimedia Commons.

The other branch I want to point out is Sailfish, maintained by Jolla, a company founded by ex-Nokia employees. Jolla produced a couple of devices themselves, then turned to a licensing model. A few other companies have now released Sailfish phones. However, Sailfish OS is not completely open source; this Reddit post describes which parts of the stack are open.

DIY options

Nothing you can buy today could really be called an open smartphone, as far as I've found. So what can we install ourselves?

LineageOS, formerly called CyanogenMod, is probably the most popular alternative Android distribution. It has support for quite a lot of phones, but far from all of them. If you've got a supported phone, installation looks like this—not too complex, but you'd need quite a bit of confidence to try this on your only phone.

Android ROMs like Lineage are often used together with Open GApps, which provides the proprietary Google applications that are bundled into Android phones when you buy them. I'm not sure I'd call that 'open', but it's undeniably useful for many people who rely on Google services.

One other Android distribution I'll mention is Replicant. This is a strictly free software distribution, avoiding proprietary code as much as possible. However, it only supports a few devices, all of which appear to be older models. F-Droid's collection of open source apps (see above) is a crucial companion for a distribution like this.


XFCE and Wayland running on a Galaxy Nexus. Image by Drebrez, via PostmarketOS wiki.

PostmarketOS is a new entrant not based on Android, which aims to keep devices running up-to-date GNU/Linux for ten years. It can already run on more than 30 devices, which sounds impressive, but in most cases key features like audio and cameras are not working yet. We have to hope that this can evolve into a useful phone platform.

A new hope

I'm writing this a few weeks after the Purism Librem 5 crowdfunding campaign comfortably met its $1.5 million target. Purism, a company that already sells open laptops, plans to make a phone that's as open as possible, with hardware switches to disconnect parts like the cellular modem, which probably has to run proprietary code. They even say that, like the unrealised Ubuntu Edge, you will be able to plug in peripherals and run a full Linux desktop from it. The estimated shipping date is early in 2019.

The Librem 5 is currently the hot new thing in the open phone world, but there are already some concerns about it. At $600, it's priced like a premium device, but the proposed chips date from either 2013 or 2011, so the experience in 2019 may be underwhelming. And it could still require proprietary firmware for tasks like decoding video.

Key challenges

The biggest challenge for any alternative mobile OS is bootstrapping an app ecosystem. A platform with few apps struggles to attract users, and with few users there's little incentive for developers to make apps. Catch-22. Even Microsoft's efforts weren't enough to create a viable ecosystem around Windows Phone.

There are a couple of strategies companies have tried to avoid the 'app trap':

  • HTML apps: the web is an open platform, and lots of services already have a mobile web interface, although it's often inferior to their native mobile apps. Despite all of the effort going into browsers, though, web applications still tend to feel slower than native ones, and rarely work well offline.
  • Emulate Android: Sailfish does this, albeit with a proprietary component. But if you do it well, developers won't target your platform specifically, so it won't have much to differentiate it from Android.

Open source efforts face another challenge. The Fedora image I installed on my laptop would work on most 64-bit PCs, but with a phone OS, you have to download an image for one specific model. This makes it much harder to casually experiment with an alternative OS: someone needs to build it for the exact phone you have. PostmarketOS is attempting to change this model, but it's very early days. Meanwhile, Google's 'Project Treble' may provide a common interface to build a platform on, although it leaves a lot of code below the common interface under the control of vendors.

What options have I missed? Please point out other FOSS phone projects in the comments.

Why do we love the terminal?

The terminal is a tool many programmers use every day. But unusually for a software tool, it hasn't changed much in quite a long time. When we open up a terminal window, it's so that our computer can pretend to be a VT100 from 1978:


VT100 terminal (from Jason Scott on Wikimedia Commons)

And the way programs talk to it is even older. Terminals are sometimes called TTYs, which is short for teletypewriter:


Teletype model 33 (from AlisonW on Wikimedia Commons)

Our computers have got massively more powerful since the 70s, and the interfaces are immensely richer. We use many other interfaces alongside the terminal, but it's still important and nothing is about to replace it. Why is this basic interface so great?

The answer has three parts: the keyboard, the terminal and the shell. Each one has a set of advantages we can use in designing other applications and systems.


Keyboard interfaces

The keyboard gives us a glorious power to talk to the computer. My laptop has 83 keys, and my fingers mostly know their own way to the keys I want, with little thought involved. We can also combine keys (like Ctrl-C) to give more meanings. In comparison, moving a mouse to click on things feels very limiting.

Some programmers really care about their keyboards, and spend considerable sums on ergonomic keyboards or special key switches. But even a basic USB or laptop keyboard is a very effective way for your fingers to control the computer.

Mice have their place too: the keyboard would be a clumsy tool for editing photos, for instance. And increasingly we interact with our computers via touchscreens, which have their own advantages and disadvantages. But for many tasks, it's hard to beat fingers on physical keys. Switching between keyboard and mouse (or touchscreen) also slows us down, so if we're already using the keyboard, we like to keep our fingers there.

Applications are starting to recognise the benefits of keyboard interfaces again. I'm writing this post in Atom, which is one of several modern editors with a command palette:


I open the command palette with a keyboard shortcut, it searches the available commands as I type, and it's smart enough to deal with spelling mistakes. Keyboard interfaces are often considered more difficult for new users, but this design can be easier than searching through nested menus or trying to remember dozens of toolbar icons.


The terminal itself

The terminal is that grid of fixed-width characters (i and m take the same space, unlike in normal text), and the interfaces that control it. There are a few great things about the terminal:

Firstly, the interface is established, stable, and predictable. If a program writes '\x1b[1;31m' to the terminal, the output will turn red. This 'ANSI escape code' works in Gnome terminal, in Konsole, Xterm, iTerm, and in the web-based terminal output from Travis CI jobs. It will work with a terminal from 10 years ago, and probably one from 10 years in the future. Terminals did once have compatibility issues, and the terminfo database was built to handle this, but today we rarely have to worry about it.
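For instance, a program can emit the 'switch to red' sequence from Python just by printing it (these are standard SGR escape codes):

```python
RED = '\x1b[1;31m'   # SGR code: bold red foreground
RESET = '\x1b[0m'    # back to default attributes

print(RED + 'error:' + RESET + ' this works in any modern terminal')
```

Run that in any of the terminals above and 'error:' comes out red; redirect it to a file and you'll see the raw escape bytes instead.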

Secondly, the terminal is mostly controlled by in-band signalling—special codes, like the 'switch to red' code above, are written to and read from the terminal. There are a few exceptions, like changing the size of the terminal window, which use other API calls, but mostly it comes down to reading and writing bytes. Those bytes can easily be sent over the network to control a terminal remotely, e.g. over SSH. They can even be recorded.

Thirdly, the terminal is very efficient. My laptop has over 2 million pixels, but a full screen terminal only has around 12500 characters. So using SSH requires only a tiny amount of bandwidth, because it sends characters rather than pixels. And SSH isn't specially designed for poor connections: Mosh is a modern alternative which is.
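The difference is easy to quantify with the rough numbers above (illustrative figures, not measurements):

```python
pixels = 2_000_000       # full-screen display, as above
bytes_per_pixel = 3      # 24-bit colour
chars = 12_500           # full-screen terminal, roughly one byte per character

# Sending every pixel of one frame vs. every character of one screen
print(pixels * bytes_per_pixel // chars)  # → 480
```

Even this naive comparison puts characters hundreds of times cheaper than pixels, before any compression on either side.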


The shell

There are many programs you can run in a terminal, but most often you're interacting with a 'system shell' such as bash or zsh. You type a command in, press enter, the computer runs it and prompts you for another.

These commands are definitely harder to learn than clicking around files and settings. But if you need to ask for help online, an explanation of what to click is much more work than a command to copy+paste, both for you and for the person answering. And once you know some commands, it's easy to put them into a script to repeat a procedure you need to do often—there's no convenient equivalent for clicking things.

Writing a command-line program to be run from the shell is much less work than designing and creating a GUI (graphical user interface), whatever programming language and toolkit you're using. This is especially true if the program needs to run on different platforms. So there are lots of command line tools with no GUI equivalent—like pip, which installs Python packages, or nikola, which builds this blog. The obvious way to use these tools is in the terminal.
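To give a sense of scale, a complete (if trivial) command-line tool needs only a few lines with Python's standard argparse module. This is a hypothetical sketch, not one of the tools named above:

```python
import argparse

parser = argparse.ArgumentParser(description='Greet someone.')
parser.add_argument('name', help='who to greet')
parser.add_argument('--shout', action='store_true', help='use upper case')

args = parser.parse_args(['world'])  # a real tool would call parse_args()
greeting = 'Hello, ' + args.name
print(greeting.upper() if args.shout else greeting)
```

Argument parsing, help text and error messages all come for free; the equivalent GUI would need a window, a layout, widgets and event handlers on every platform you support.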

The combination of these three pieces—the keyboard, the terminal API and the command line—along with all the programs written for this environment, means that we'll be typing commands into a terminal for a long time yet, even if it looks arcane and outdated.

I can't import it!

Have you installed a Python package, but now you can't import it? Or can you import it when you run code one way, but not when you run code another way?

These are common problems, and this post is an attempt to explain what's going on and how you can fix it. If someone has sent you a link here, they're not trying to be rude: we can help more people more effectively by sharing a detailed explanation than by writing a short explanation each time.

Lots of Pythons

There's probably more than one copy of Python on your computer. When you run code in different ways, it might get run by different Pythons. To see which Python is running your code, run:

import sys
print(sys.executable)

On my laptop, all of these are different Pythons (and they're far from the only ones):

  • /usr/bin/python (system Python 2)
  • /usr/bin/python3 (system Python 3)
  • /home/takluyver/miniconda3/bin/python (Python in conda)
  • /home/takluyver/miniconda3/envs/py36/bin/python (a separate conda environment)

Each conda environment, virtualenv or pyenv you create is a separate Python. But you can also have different names for the same Python: e.g. /usr/bin/python3 is the same as /usr/bin/python3.5 on my laptop (it's a 'symlink').

Each Python installation or environment has its own package folder. When you install a package, it's being installed for one particular Python, which might not be the one you are trying to load it from.
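You can see both halves of this from inside Python itself: sys.executable is the interpreter that's running, and the stdlib sysconfig module reports where that interpreter's packages live:

```python
import sys
import sysconfig

print(sys.executable)                    # which Python is running this code
print(sysconfig.get_paths()['purelib'])  # where its packages are installed
```

Run this from two different environments and you'll see two different package folders, which is exactly why a package installed in one can't be imported from the other.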

You can also see all of the places that import will look for a module by running:

import sys
print(sys.path)

What can I do?

First, use the code above to find which Python is running your code.
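It can also help to check where a successfully imported module was loaded from, via its __file__ attribute. Here json stands in for whatever package you're investigating:

```python
import json  # stand-in for the package you're investigating

# The file it was loaded from, inside one particular Python's package folder
print(json.__file__)
```

If that path belongs to a different Python than the one you installed the package for, you've found your problem.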

Option 1: Install the package for the same Python you're using.

With pip, this is easy:

path/to/python -m pip install some_package

If you want to install it with conda, activate the conda environment containing the Python you're using, and then run conda install some_package.

Option 2: Switch to use a Python where the module you want is installed.

It's hard to provide instructions for this, because it depends on the tools you're using. If you're using an IDE, look in the settings. Here it is in Pycharm, for instance:


Understanding D-Bus

If you use desktop Linux, like I do, D-Bus is working behind the scenes to do things like storing passwords and displaying notifications - and probably many more things I'm not even aware of.

I've run into D-Bus a couple of times, and found it confusing, so I volunteered to do a short talk about it at Portsmouth LUG, forcing myself to sit down and understand it. I've come to like parts of it, and I even wrote a Python D-Bus interface called Jeepney. But it's complicated, and I haven't found an explanation I like. So this is what I think is worth understanding about D-Bus. It's not a tutorial about how to use D-Bus, but if you've copied and pasted D-Bus code from somewhere, it might help you use it better.

It's about messages

D-Bus is a framework for different programs to communicate. That means they send each other messages. Unfortunately, that simple idea gets a bit buried by the Java-inspired, non-optional object-oriented layers on top of the messages.

Guess which parts I still don't like.

There are two messaging patterns on offer:

  1. Method calls go to a single destination, and get a reply, which is either a method return, or an error.
  2. Signals are sent out to anyone who's interested, and there's no response.

That's four message types in total, including returns and errors. All four can carry data: different kinds of number, UTF-8 strings, arrays and dictionaries, and it's even possible to transfer a file descriptor in a D-Bus message.

The bus is the hub that passes messages on to the right destinations. All the other programs using D-Bus send and receive messages through the bus. You can also send messages to the bus itself, to tell it who you are or to subscribe to a signal.

There are normally two buses available: the session bus is for your login (e.g. desktop notifications), while the system bus handles system-wide stuff (e.g. USB stick plugged in).

Names, names, names

To show a desktop notification, you'll send messages to the object /org/freedesktop/Notifications, using the interface org.freedesktop.Notifications, on the bus name org.freedesktop.Notifications. Ugh. Why so many names?

The bus name is which program you're talking to. Each program that connects to the bus gets a unique name automatically (like :1.247). It can then claim other names, like org.freedesktop.Notifications. The bus ensures that only one program has each name. On my laptop, Gnome shell has claimed org.freedesktop.Notifications, along with many other names.

If you send a message to a bus name which isn't claimed, the bus may be able to start a program to handle it. This is called D-Bus activation, and it's governed by simple files saying which program to start for which name.

The object path refers to something inside the program which will handle the message. For notifications, there's only one object to deal with: /org/freedesktop/Notifications. But the secrets API provided by Gnome keyring, for instance, has an object for each secret stored, such as /org/freedesktop/secrets/collections/login/2.

Finally, the interface indicates which set of methods and signals are to be used with the object. There's usually only one interface that's interesting for applications, but most objects also have some standard interfaces like org.freedesktop.DBus.Introspectable, which you can use to examine the methods and signals of an object.

All these names start with reversed domain names, so people don't accidentally use the same name for different things. The examples I've given are defined by freedesktop.org. If I was defining my own D-Bus API, the names might look like io.github.takluyver.projectname.

Developer tools

There are a couple of nice tools available for investigating D-Bus:

D-Feet screenshot

D-Feet shows you what D-Bus objects there are on your system, the methods you can call and signals you can subscribe to. It can also execute methods and show you the results, but this feels a bit clunky.

Bustle screenshot

Bustle records and shows messages being sent over D-Bus. There's also a command line tool called dbus-monitor which serves a similar purpose.

Other details

D-Bus uses Unix sockets, which are like network sockets but within one computer. Unix sockets are pretty clever in their own right. Each bus creates one Unix socket, and programs connect to it to talk to the bus.

Actually, D-Bus can also run over network sockets, or a number of other transports. Two processes can also use the protocol without a bus running. This is all in the spec, but the usual way to use D-Bus is with a bus and a Unix socket.

Why did I write yet another package manager?

Update July 2017: Batis is no longer maintained. I think something like it would have been great a few years earlier, but now I hope that Flatpak or Snaps will give us a better option for distributing desktop applications.

Batis is a package manager for desktop applications on Linux. Linux users are probably already used to using several different package managers - apt, pip, npm, etc. - so why on earth do we need one more?

Firstly, having many package managers is not a problem. There are many different kinds of things we want to install. Pip, for instance, knows how to install Python packages; if you want to install a Ruby package, you'll need another tool. The Atom text editor includes a package manager, apm, purely for its own extensions, and other extensible applications, like browsers, are quietly doing their own package management too. You can try to build one package manager to rule them all, as Linux distros do, but that system has to be more complex. ‘Do one thing well’ applies to package managers too.

The best way to distribute desktop applications on Linux at present is through distro repositories. The idea is that power users from each distro who like your application will prepare packages, making it easily available for other users of the same distro. But this has several drawbacks for application developers:

  • You’re not in control of your distribution channels - if a distro doesn’t like your app, they can block their users’ easy access to it.
  • Feature releases of your app are tied to releases of the entire distro. Launch version 2.0 at the wrong time, and even keen Ubuntu users who upgrade on the day a release comes out won't see your new features for six months. Some users will be stuck on old versions for years. There are exceptions, but this is the norm in the major Linux distributions.
  • If you want to bypass the distros and host your own repositories, you have to deal with several different packaging systems - at a minimum, .deb and .rpm packages. Desktop Linux is a small market anyway, and this fragmentation makes it even more painful to target.
  • Your install instructions either have to be vague, or contain a table of commands for different distros (like 0 A.D. or git-cola). Neither is ideal for less technical users.
  • Distro packages can usually only be installed as root. This often doesn’t matter for desktop use, where the user is usually the owner of the computer, but it can be an annoying restriction.

The tools for packaging modules in many programming languages can also be used to package command line applications. You've probably seen utilities that you install using pip (e.g. Nikola), npm (bower), or gem (Jekyll). But these tools don't know about creating menu entries or file associations, so they're not great for distributing graphical applications.

Many applications forgo all of these packaging mechanisms, and distribute tarballs or zip files from their own website (e.g. Powder Toy, PyCharm, Visual Studio Code). This is the starting point for Batis. Batis adds a consistent way to install and uninstall applications, so that developers can focus on their applications, not on rewriting Linux install code. A Batis package is a regular tarball with some extra files, so there’s no need to build another tarball for users without Batis. You even get a free install.sh script inside your package for those users to run.

Batis adds one extra layer above tarball downloads, the index file. This is a JSON file containing the URLs of the tarballs for download, along with some basic metadata. Batis uses this to select the best build to install - for instance if you have separate builds for 64 bit and 32 bit systems. In future versions, the index will also be used to check for updates to installed applications.

So, Batis is a distro-agnostic way for users to get applications directly from developers. It works with the standard mechanisms to integrate applications into the desktop environment. And it's an evolutionary improvement on distributing plain tarballs.

Batis website—for more information

So you want to write a desktop app in Python

This is an overview of the best tools and the best resources for building desktop applications in Python.

First things first. You can build great desktop applications in Python, and some are widely used (like Dropbox). But you'll have to find your own way much more than you would using Microsoft's or Apple's SDKs. The upside is that, with a bit of legwork to package it appropriately, it's quite feasible to write a Python application that works on all the major platforms.

GUI toolkits

The first thing you'll need to choose is a GUI toolkit.

Qt logo
  • For traditional desktop UIs, Qt is a clear winner. It's powerful, looks native on all the major platforms, and has probably the biggest community. There are two different Python bindings: PyQt is older and more mature, but it's only free if your application is open source (licensing), while PySide is newer and more permissively licensed (LGPL). I refer to the main Qt docs a lot - the C++ examples mostly translate to Python quite well - but both PyQt's and PySide's docs contain some useful information. Qt Designer is a drag and drop interface to design your UI; you can compile its .ui files to Python modules with the pyuic command line tool.

Qt Designer in action

Kivy logo
  • For attractive, tablet-style interfaces, Kivy is the right choice. It's a fairly young but promising system. If you want to bring your application to tablets and smartphones, then Kivy is the only option that I'm aware of. More info
  • When you want a basic GUI and don't care about aesthetics, Tkinter is a simple option. It's installed as part of Python. Python's own tkinter documentation is rather minimal, but it links to a bunch of other resources. This site is my favourite - it hasn't been updated in years, but then neither has Tkinter (except that in Python 3, you import tkinter rather than import Tkinter).
  • pygame is popular for building simple 2D games. There are also frameworks for 3D graphics (pyglet, Panda3d), but I don't know much about them.
  • An increasingly popular option is to write your application as a local web server, and build the UI in HTML and Javascript. This lets you use Python's large ecosystem of web frameworks and libraries, but it's harder to integrate with desktop conventions for things like opening files and window management. CEF Python lets you make a window for your application, based on Google Chrome, but I haven't tried that.

A couple of alternatives I wouldn't recommend unless you have a reason to prefer them: GTK is popular on Linux, but it looks ugly on other platforms. The older pygtk bindings have excellent documentation; the newer PyGObject system, which supports recent versions of GTK and Python, doesn't (though it's getting better). wx seems to have a good community, but development is slow, and new projects that could have used it now mostly seem to pick Qt.

Packaging and Distribution

This is probably the roughest part of making an application in Python. You can easily distribute tools for developers as Python packages to be installed using pip, but end users don't generally have Python and pip already set up. Python packages also can't depend on something like Qt. There are a number of ways to package your application and its dependencies:

  • Pynsist, my own project, makes a Windows installer which installs a version of Python that you specify, and then installs your application. Unlike the other tools listed here, it doesn't try to 'freeze' your application into an exe, but makes shortcuts which launch .py files. This avoids certain kinds of bugs.
  • cx_Freeze is a freeze tool: it makes an executable out of your application. It works on Windows, Mac and Linux, but only produces the executable for the platform you run it on (you can't make a Windows exe on Linux, for example). It can make simple packages (.msi for Windows, .dmg for Mac, .rpm for Linux), or you can feed its output into NSIS or Inno Setup to have more control over building a Windows installer.
  • PyInstaller is similar to cx_Freeze. It doesn't yet support Python 3 (update: it does now, since October 2015), but it does have the ability to produce a 'single file' executable.
  • py2app is a freeze tool specifically for building Mac .app bundles.
  • py2exe is a Windows-only freeze tool. Development stopped for a long time, but at the time of writing there is some recent activity on it.
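As an illustration of the first option, a minimal Pynsist configuration looks roughly like this. This is a sketch from memory, with myapp and requests as placeholder names; check the Pynsist documentation for the exact current format:

```ini
[Application]
name=My App
version=1.0
; module:function to run when the shortcut is launched
entry_point=myapp:main

[Python]
version=3.6.3

[Include]
; extra packages to copy into the installer
packages = requests
```

Running pynsist with a file like this builds a Windows installer containing Python, your code and its dependencies.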

Linux packaging

Although some of the freeze tools can build Linux binaries, the preferred way to distribute software is to make a package containing just your application, which has dependencies on Python and the libraries your application uses. So your package doesn't contain everything it needs, but it tells the package manager what other pieces it needs installed.

Unfortunately, the procedures for preparing these are pretty complex, and Linux distributions still don't have a common package format. The main ones are deb packages, used by Debian, Ubuntu and Mint, and rpm packages, used by Fedora and Red Hat. I don't know of a good, simple guide to packaging Python applications for either - if you find one or write one, let me know.

You can get users to download and install your package, but if you want it to receive updates through the package manager, you'll need to host it in a repository. Submitting your package to the distribution's main repositories makes it easiest for users to install, but it has to meet the distro's quality standards, and you generally can't push new feature releases to people except when they upgrade the whole distribution. Some distributions offer hosting for personal repos: Ubuntu's PPAs, or Fedora's Fedorapeople repositories. You can also set up a repository on your own server.

If you don't want to think about all that, just make a tarball of your application, and explain to Linux users next to the download what it requires.


  • Threading: If your application does anything taking longer than about a tenth of a second, you should do it in a background thread, so your UI doesn't freeze up. Be sure to only interact with GUI elements from the main thread, or you can get segfaults. Python's GIL isn't a big issue here: the UI thread shouldn't need much Python processing time.
  • Updates: Esky is a framework for updating frozen Python applications. I haven't tried it, but it looks interesting.
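The threading advice above can be sketched in a GUI-agnostic way. The queue here stands in for whatever mechanism your toolkit provides for getting results back to the main thread:

```python
import threading
import queue

results = queue.Queue()

def slow_task():
    # Long-running work happens here, off the UI thread
    results.put('done')

threading.Thread(target=slow_task).start()

# In a real app, a timer on the main thread would poll the queue and
# update widgets; only the main thread should touch GUI elements.
print(results.get())  # → done
```

Qt, Tkinter and Kivy each have their own thread-safe ways to schedule work on the main loop, but the shape of the solution is the same.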

ASTsearch - code searching that knows about code

This weekend's hack is a tool for searching Python code.

ASTsearch source code on Github

What's wrong with grep, you might ask? Let's try to find every division in IPython's codebase:

$ grep --include "*.py" -rF "/" .
config/loader.py:        after applying any insert / extend / update changes
config/configurable.py:                    # ConfigValue is a wrapper for using append / update on containers
config/tests/test_loader.py:        argv = ['--a=~/1/2/3', '--b=~', '--c=~/', '--d="~/"']
config/tests/test_loader.py:        self.assertEqual(config.a, os.path.expanduser('~/1/2/3'))
config/tests/test_loader.py:        self.assertEqual(config.c, os.path.expanduser('~/'))
config/tests/test_loader.py:        self.assertEqual(config.d, '~/')

In all, it finds 1685 lines, and very few of them are actual division. You could write a regex that tries to ignore comments and strings, but now you have two problems.

Let's do the same with ASTsearch:

$ astsearch "?/?"
 646|        shalf = int((string_max -5)/2)

1254|        return h / i

 347|        whalf = int((width -5)/2)

The output is 89 lines, and when spacing and filenames are removed, there are 46 results, all of which represent division operations.

In this case, grep produced a lot of false positives. In other cases, it will have false negatives—results that you wanted but didn't find. a=1 won't match a= 1, and "this" won't match 'this'. For simple cases, regexes can help (a\s*=\s*1), but they soon get unwieldy. ASTsearch is insensitive to how you format your code: even statements split over several lines are easy to find.

How does it work?

The string pattern—?/? in the example above—is turned into an AST pattern. ASTs, or Abstract Syntax Trees, are a structured representation of a formal language such as Python source code.

? is a wildcard, so ?/? means "anything divided by anything". I picked ? for this because it's not used in Python syntax, so it doesn't stop you writing more specific search patterns.

Some more patterns:

  • a = ? - Something is assigned to a
  • class ?(TemplateExporter): ? - A subclass of TemplateExporter
  • for ? in ?: ? \nelse: ? - A for loop with an else clause

Then it walks the directory, parsing each file with a .py extension using Python's built in parser. The standard library ast module contains the tools to parse the code and walk the AST, and astcheck, another tool I wrote, can compare AST nodes against a template.
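The core idea can be illustrated with just the standard-library ast module. This sketch is my own, not astsearch's code: it finds divisions the way the "?/?" pattern does, by matching nodes in the parsed tree rather than characters in the text.

```python
import ast

def find_divisions(source, filename="<string>"):
    """Return the line numbers of real division operations in source.

    Comments and string literals never produce BinOp nodes,
    so they can't cause false positives.
    """
    tree = ast.parse(source, filename)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
            hits.append(node.lineno)
    return hits

code = '''
ratio = "a / b in a string"   # a / in a comment
half = (total - 5) / 2
'''
print(find_divisions(code))  # -> [3]: only the real division
```

Matching a wildcard pattern like `a = ?` works the same way, except that some fields of the template node are allowed to match anything — which is the comparison astcheck implements.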

Besides the command line interface, you can also use ASTsearch as a Python module (import astsearch). It's possible to define complex search patterns in Python code that can't be written at the command line. See the docs for some more details.

What's the catch?

ASTsearch only works on Python files, and Python files that are entirely valid syntax (that's Python 3 syntax for now). If just the last line can't be parsed, it won't find any matches in that file.

It's slower than grep, because what it's doing is much more complex, and grep is highly optimised. But Python's parser is doing most of the hard work, and that's written in C. On my laptop, scanning the IPython codebase (about 100k lines of code) takes about 3.5 seconds—definitely not instant, but far faster than I can think about even a couple of results.

There are search patterns you can't express at the command line. For instance, you can't match function calls with a specific number of arguments (but you can find function definitions with a given number of arguments: def ?(?, ?): ?). I might extend the pattern mini-language once I've got a feel for what would be useful.

How do I install it?

pip install astsearch

Readable Python coroutines

Quick exercise: write a piece of code that, each time you pass it a word (a string), tells you if you've passed it that word before. If you're reading a post with a title like this, it shouldn't take you more than a few minutes. For bonus points, have an option to ignore case, so it counts 'parrot' and 'Parrot' as the same word.

What did you go for? A function with a global variable (yuck!)? A class with a method? A closure?
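For comparison, a closure version might look like this (a sketch; the names are mine):

```python
def have_seen_closure(case_sensitive=False):
    seen = set()

    def check(word):
        if not case_sensitive:
            word = word.lower()
        res = word in seen
        seen.add(word)
        return res

    return check
```

Calling `have_seen_closure()` gives you a function: the first `hs('parrot')` returns False, and `hs('Parrot')` afterwards returns True.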

How about a coroutine? Here's what that would look like:

def have_seen(case_sensitive=False):
    seen = set()

    res = None
    while True:
        word = (yield res)
        if not case_sensitive:
            word = word.lower()

        res = (word in seen)
        seen.add(word)

And here's how you would use it:

>>> hs = have_seen()
>>> next(hs)  # prime it
>>> hs.send('Hello')
False
>>> hs.send('World')
False
>>> hs.send('hello')
True

Coroutines in Python are based on the generator machinery - see the yield keyword in there? PEP 342, "Coroutines via Enhanced Generators", added the necessary features to Python 2.5, but it's not a very well known part of the language. And it's not hard to see why - the code above isn't as clear as it should be:

  • Emitting and receiving a value happen in the same yield expression. So rather than yielding the response at the bottom of the loop, we have to store it in a variable and jump back to the top of the loop.
  • The coroutine has to emit a value before it can receive one, even though there's nothing it really wants to emit. That's why we set res = None before the loop, and why the caller has to prime it by calling next(hs) before using it. It's easy to write a decorator that calls next for you, but that doesn't make the code inside the coroutine any clearer.
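That priming decorator is a well-known recipe; a minimal sketch (my own naming):

```python
from functools import wraps

def primed(genfunc):
    """Decorate a generator function so new coroutines are pre-primed."""
    @wraps(genfunc)
    def start(*args, **kwargs):
        coro = genfunc(*args, **kwargs)
        next(coro)  # advance to the first yield, ready to receive
        return coro
    return start
```

With `@primed` on `have_seen`, callers could use `hs.send(...)` straight away — but, as noted, the body of the coroutine is just as awkward as before.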

So the standard Python syntax is rather awkward. But we can make it clearer by using a bit of wrapper code. The trick is separating sending a value from receiving one:

from coromagic import coroutine, receive

@coroutine
def have_seen2(case_sensitive=False):
    seen = set()

    while True:
        word = (yield receive)
        if not case_sensitive:
            word = word.lower()

        yield (word in seen)
        seen.add(word)

We no longer need the res variable. Instead, we alternate between two uses of yield: a receiving yield, where we send the wrapper a token to indicate that we're ready for a new value, and a sending yield, where we don't expect to get a value back. The caller can use this in exactly the same way as the original coroutine, except that the wrapper primes it automatically, so there's no need to call next(hs).

The wrapper expects a receiving yield first, and at most one sending yield after each receiving yield. If a receiving yield is followed by another receiving yield, without a sending yield in between, None is returned to the caller, just like a function without a return statement.

Handling exceptions

If either of our coroutines above raises an exception, we can't keep using that coroutine:

>>> hs.send(12)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "coro_ideas.py", line 8, in have_seen
    word = word.lower()
AttributeError: 'int' object has no attribute 'lower'
>>> hs.send('hi')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

I've got a solution of sorts for that, although it still feels a bit awkward. The coroutine can request a context manager to catch exceptions:

from coromagic import get_exception_context

def have_seen3(case_sensitive=False):
    exception_context = (yield get_exception_context)
    seen = set()

    while True:
        with exception_context:
            word = (yield receive)
            if not case_sensitive:
                word = word.lower()

            yield (word in seen)

The context manager co-ordinates with the wrapper to suppress the exception inside the coroutine, but raise it to the caller:

>>> hs3 = have_seen3()
>>> hs3.send(12)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./coromagic.py", line 28, in send
    raise self.last_exc
  File "./coro_ideas.py", line 47, in have_seen3
    word = word.lower()
AttributeError: 'int' object has no attribute 'lower'
>>> hs3.send('hi')
False

Now the error doesn't stop us processing valid input afterwards.

Who cares about coroutines?

I find them interesting on their own. But this isn't just academic - there are cases where coroutines can be the clearest way to write something.

The have_seen example could easily be written with a class or a closure. Coroutines come into their own for making state machines. With a class or a closure, the state has to be stored in a variable, and you need a lookup table to decide how to behave in each state. A coroutine can store the state as the point where its code is executing.

It's hard to come up with an example of this that's both realistic and short, but here's my attempt. We're writing a plugin for a chat application, which lets any chatter say "password foo", silencing everyone until someone guesses "foo". The application just passes us each message, and expects a True/False response saying whether it should be broadcast.

@coroutine
def password_game():
    while True:
        # Normal chatting
        while True:
            msg = (yield receive)
            if msg.startswith("password "):
                password = msg[9:]
                yield False  # Don't show the password itself
                break
            yield True  # Broadcast

        # Waiting for someone to guess the password
        while (yield receive) != password:
            yield False  # Don't send messages
        yield True   # Show everyone the password once it has been guessed

In IPython, we have some coroutines for input processing. For instance, the transformer to strip prompts from pasted code processes the first two lines in a prompt-detection state. Then it moves into a prompt-stripping state if it detected a prompt, or a no-op state if it didn't.

The pattern of sending and receiving is also reminiscent of writing a thread with input and output queues, and waiting for values on those queues. But threads are messy: you have to deal with synchronisation and shut them down safely. Calling a coroutine is as deterministic as calling a function: it runs, returns a value, and the calling code carries on. Of course, that means that coroutines themselves don't run in parallel. But you can use them to build clever things like tulip, which will become the asyncio module in Python 3.4. Tulip can suspend one coroutine and run others while it waits for data, and then resume it when the data it needs is ready.

The best resource on coroutines in Python is this excellent course by David Beazley.

Coromagic source code

This is the module used in the examples above.


from functools import wraps

# Tokens
receive = object()
get_exception_context = object()

class CoroutineWrapper(object):
    last_exc = None

    def __init__(self, generator):
        self.gen = generator

        ready = next(self.gen)
        if ready is get_exception_context:
            ready = self.gen.send(ExceptionContext(self))
        assert ready is receive

    def send(self, arg):
        self.last_exc = None

        res = self.gen.send(arg)
        if res is receive:
            # Two receiving yields in a row: nothing was emitted
            res = None
        else:
            # Advance past the sending yield to the next receiving yield
            assert next(self.gen) is receive

        if self.last_exc is not None:
            raise self.last_exc

        return res

def coroutine(genfunc):
    """Decorator for a generator function to wrap it as a coroutine."""
    @wraps(genfunc)
    def wrapped(*args, **kwargs):
        return CoroutineWrapper(genfunc(*args, **kwargs))

    return wrapped

class ExceptionContext(object):
    def __init__(self, corowrapper):
        self.corowrapper = corowrapper

    def __enter__(self):
        pass

    def __exit__(self, type, value, tb):
        if type is None or type is GeneratorExit:
            return False

        # Pass other exceptions to the wrapper, and silence them for now
        self.corowrapper.last_exc = value
        return True