
2014-06-20

What's the difference between StaticPython and PyRun?

This blog post explains the difference between StaticPython and PyRun.

Both StaticPython and PyRun are free, portable binary distributions of Python 2.x and 3.x, for Linux, Mac OS X (and PyRun also works on FreeBSD without Linux emulation). That is, a user without root access can quickly download, extract and run either StaticPython or PyRun, and it won't conflict with the (possibly old) Python versions installed to the system (by root). Windows users should use Portable Python instead.

An important difference is that StaticPython doesn't need any external files to run on Linux, while PyRun needs many system libraries (libssl.so.1.0.0, libcrypto.so.1.0.0, libz.so.1, libsqlite3.so.0, libbz2.so.1, libpthread.so.0, libdl.so.2, libutil.so.1, libm.so.6, libc.so.6), and it will not work on systems which don't have all of the required libraries (in the required versions) installed, so PyRun is much less portable.

Another important difference is that with PyRun you can compile and install C extensions, and with StaticPython you can't (even ctypes doesn't work with StaticPython). However, many extensions are precompiled: Stackless + greenlet + libssl + Tokyo Cabinet + libevent2 + Concurrence + AES + Syncless + MessagePack + Gevent MySQL + PyCrypto.

Another important difference is that PyRun integrates nicely with packaging (setuptools, pip etc.), and is a nice, self-contained replacement for virtualenv. StaticPython doesn't help you with packaging, you have to do Python library installation and distribution manually.

There are many small differences, e.g. PyRun is available in both 32-bit and 64-bit versions, while StaticPython is 32-bit only. PyRun needs at least 14 megabytes of disk space (10 megabytes for the binary executable and about 4 megabytes for the mandatory system library dependencies), while StaticPython needs between 6 and 9 megabytes, depending on which feature set you need.

My recommendation: If PyRun works on your system (because you have the right version of system libraries installed, and you have enough free disk space), then use PyRun, otherwise use StaticPython.

Caveat: I am the author of StaticPython.

2013-12-01

Binary search in line-sorted text files

This blog post announces pts-line-bisect, a C program and an equivalent Python script and library for doing binary search in line-sorted text files, and it also explains some of the design details of the Python implementation.

Let's suppose you have a sorted text file, in which each line is lexicographically larger than the previous one, and you want to find a specific range of lines, or all lines with a specific prefix.

I've written the program pts-line-bisect for that recently. See the beginning of the README for many examples of how to use the C program. The Python program has fewer features; here is how to use it:

  • Closed ended interval search: print all lines at least bar and at most foo:
    $ python pts_line_bisect.py -t file.sorted bar foo
  • Open ended interval search: print all lines at least bar and smaller than foo:
    $ python pts_line_bisect.py -e file.sorted bar foo
  • Prepend position: print the smallest offset where foo can be inserted (please note that this does not report whether foo is present; the exit code is always 0).
    $ python pts_line_bisect.py -oe file.sorted foo
  • Append position: print the largest offset where foo can be inserted (please note that this does not report whether foo is present; the exit code is always 0).
    $ python pts_line_bisect.py -oae file.sorted foo
  • Exact range: print the range (start byte offset and after-the-last-byte end offset) of lines containing only foo:
    $ python pts_line_bisect.py -ot file.sorted foo

Please note that these tools support duplicate lines correctly (i.e. when the same line appears multiple times, forming a consecutive block).

Please note that these tools and libraries assume that the key is at the beginning of the line. If you have a sorted file whose lines contain the sort key somewhere in the middle, you need to make changes to the code. It's possible to do so in a way that the code will still support duplicate keys (i.e. records with the same key).
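As an in-memory illustration of the key-in-the-middle case, here is a sketch using the standard bisect module (not pts-line-bisect code): binary-search an auxiliary list of extracted keys, which keeps duplicate keys working:

```python
import bisect

# Rows sorted by their *second* field; the sort key is mid-line.
rows = ['c 10 x', 'a 20 y', 'b 20 z', 'd 30 w']
keys = [row.split()[1] for row in rows]  # ['10', '20', '20', '30']

# All rows whose key is exactly '20' (duplicate keys form a block):
lo = bisect.bisect_left(keys, '20')
hi = bisect.bisect_right(keys, '20')
print(rows[lo:hi])  # ['a 20 y', 'b 20 z']
```

The file-based code needs the analogous change in one place: compare the extracted key of the line read at the midpoint, instead of the whole line.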

Analysis

I've written the article titled Evolution of a binary search implementation for line-sorted text files about the topic, containing the problem statement, the possible pitfalls, an analysis of some incorrect solutions available on the web as code example, a detailed specification and explanation of my solution (including a proof), disk seek and speed analysis, a set of speedup ideas and their implementation, and further notes about the speedups in the C implementation.

As a teaser, here is an incorrect solution in Python:

def bisect_left_incorrect(f, x):
  """... Warning: Incorrect implementation with corner case bugs!"""
  x = x.rstrip('\n')
  f.seek(0, 2)  # Seek to EOF.
  lo, hi = 0, f.tell()
  while lo < hi:
    mid = (lo + hi) >> 1
    f.seek(mid)
    f.readline()  # Ignore previous line, find our line.
    if x <= f.readline().rstrip('\n'):
      hi = mid
    else:
      lo = mid + 1
  return lo

Can you spot all 3 bugs?

Read the article for all the details and the solution.

About sorting: For the above implementation to work, files must be sorted lexicographically, using the unsigned value (0..255) for each byte in the line. If you don't sort the file, or you sort it using some language collation or some locale, then the linked implementation won't find the results you are looking for. On Linux, use LC_ALL=C sort <in.txt >out.txt to sort lexicographically.
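The same byte-value ordering can be reproduced in Python by sorting the lines as byte strings (a minimal sketch, not part of pts-line-bisect):

```python
def sort_lines_bytewise(data):
    """Sort the lines of `data` (bytes) the way LC_ALL=C sort would."""
    lines = data.split(b'\n')
    if lines and lines[-1] == b'':
        lines.pop()  # Drop the empty piece after the trailing newline.
    lines.sort()  # Byte strings compare by unsigned byte value (0..255).
    return b''.join(line + b'\n' for line in lines)

print(sort_lines_bytewise(b'foo\nBar\nbar\n'))  # b'Bar\nbar\nfoo\n'
```

Note that 'Bar' sorts before 'bar' because the byte value of 'B' (0x42) is smaller than that of 'b' (0x62); a locale-aware sort would typically order them differently, which is exactly why LC_ALL=C matters.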

As a reference, here is the correct implementation of the same algorithm for finding the start of the interval in a sorted list or other sequences (based on the bisect module):

def bisect_left(a, x, lo=0, hi=None):
  """Return the index where to insert item x in list a, assuming a is sorted.

  The return value i is such that all e in a[:i] have e < x, and all e in
  a[i:] have e >= x.  So if x already appears in the list, a.insert(x) will
  insert just before the leftmost x already there.

  Optional args lo (default 0) and hi (default len(a)) bound the
  slice of a to be searched.
  """
  if lo < 0:
    raise ValueError('lo must be non-negative')
  if hi is None:
    hi = len(a)
  while lo < hi:
    mid = (lo + hi) >> 1
    if x <= a[mid]:  # Change `<=' to `<', and you get bisect_right.
      hi = mid
    else:
      lo = mid + 1
  return lo
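Combined with its bisect_right counterpart (same loop with `<` instead of `<=`), the in-memory analog of the closed-ended interval search above looks like this:

```python
import bisect

lines = ['apple', 'bar', 'bar', 'baz', 'foo', 'foo', 'quux']
# All lines at least 'bar' and at most 'foo', like the -t mode above;
# duplicates are included because bisect_left/bisect_right bracket them.
lo = bisect.bisect_left(lines, 'bar')
hi = bisect.bisect_right(lines, 'foo')
print(lines[lo:hi])  # ['bar', 'bar', 'baz', 'foo', 'foo']
```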

A typical real-world use case for such a binary search tool is retrieving the lines corresponding to a particular time range in log files (or other time-based measurement records). These files are text files with variable-length lines, with the log timestamp at the beginning of each line, and they are generated in increasing timestamp order. Unfortunately the lines are not lexicographically sorted, so the timestamp has to be decoded first for the comparison. The bsearch tool does that; it also supports parsing arbitrary, user-specifiable datetime formats, and it can binary search in gzip(1)ped files as well (by building an index). It also has high performance and low overhead, partially because it is written in C++. So bsearch is a practical tool with lots of useful features. If you need anything more complicated than a lexicographic binary search, use it instead.

Before getting too excited about binary search, please note that there are much faster alternatives for data on disk. In an external (file-based) binary search the slowest operation is the disk seek needed for each bisection. Most of the software and hardware components would be waiting for the hard disk to move the reading head. (Except, of course, that seek times are negligible when non-spinning storage hardware such as SSD or memory card is used.)

An out-of-the-box solution would be adding the data to a more disk-efficient key-value store. There are several programs providing such stores. Most of them are based on a B-tree, B*-tree or B+-tree data structure if sorted iteration and range searches have to be supported, or on disk-based hashtables otherwise. Some of the high-performance single-machine key-value stores: cdb (read-only), Tokyo Cabinet, Kyoto Cabinet, LevelDB; see more in the NoSQL software list.

The fundamental speed difference between a B-tree search and a binary search in a sorted list stems from the fact that B-trees have a branching factor larger than 2 (possibly 100s or 1000s), thus each seeking step in a B-tree search reduces the possible input size by a factor larger than 2, while in a binary search each step reduces the input size by a factor of 2 only (i.e. we keep either the bottom half or the top half). So both kinds of searches are logarithmic, but the base of the logarithm is different, and this causes a constant factor difference in the disk seek count. By careful tuning of the constants in B-trees it's usual to have only 2 or 3 disk seeks for each search even for 100GB of data, while a binary search in such a large file with 50 bytes per record would need 31 disk seeks. Taking 10ms as the seek time (see more info about typical hard disk seek times), a typical B-tree search takes 0.03 second, and a typical binary search takes 0.31 second.
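The seek counts above can be reproduced with a quick calculation, assuming one disk seek per bisection step and one per B-tree level (with nothing cached; caching the top one or two B-tree levels is what gets it down to the 2 or 3 seeks mentioned above):

```python
import math

num_records = (100 * 10**9) // 50   # 100 GB of 50-byte records.
seek_time = 0.01                    # 10 ms per disk seek.

# Binary search halves the input on each step: log2(N) seeks.
binary_seeks = math.ceil(math.log(num_records, 2))
# A B-tree with branching factor 1000 does one seek per level.
btree_seeks = math.ceil(math.log(num_records, 1000))

print(binary_seeks, binary_seeks * seek_time)  # 31 seeks, ~0.31 s
print(btree_seeks, btree_seeks * seek_time)    # 4 levels uncached
```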

Have fun binary searching, but don't forget to sort your files first!

2011-12-18

How to log out from an AppEngine app only

This blog post explains how to add a logout page to an AppEngine Python application, which will log out of the application only, without logging out of all of Google (e.g. Gmail, Calendar, Picasa, YouTube).

The default AppEngine users.create_logout_url(...) creates a logout URL which would log out from all of Google. To log out from the AppEngine app only, remove the SACSID and ACSID session cookies, which AppEngine has set right after logging in. Here is how to do it in Python:

import Cookie
import os
from google.appengine.api import users
from google.appengine.ext import webapp

class LogoutPage(webapp.RequestHandler):
  def get(self):
    target_url = self.request.referer or '/'
    if os.environ.get('SERVER_SOFTWARE', '').startswith('Development/'):
      self.redirect(users.create_logout_url(target_url))
      return

    # On the production instance, we just remove the session cookie, because
    # redirecting users.create_logout_url(...) would log out of all Google
    # (e.g. Gmail, Google Calendar).
    #
    # It seems that AppEngine is setting the ACSID cookie for http:// ,
    # and the SACSID cookie for https:// . We just unset both below.
    cookie = Cookie.SimpleCookie()
    cookie['ACSID'] = ''
    cookie['ACSID']['expires'] = -86400  # In the past, a day ago.
    self.response.headers.add_header(*cookie.output().split(': ', 1))
    cookie = Cookie.SimpleCookie()
    cookie['SACSID'] = ''
    cookie['SACSID']['expires'] = -86400
    self.response.headers.add_header(*cookie.output().split(': ', 1))
    self.redirect(target_url) 

...

from google.appengine.ext.webapp.util import run_wsgi_app

application = webapp.WSGIApplication([..., ('/logout', LogoutPage), ...])

def main():
  run_wsgi_app(application)

if __name__ == '__main__':
  main()

After adding the /logout page as indicated above, offer the logout link like this:

self.response.out.write('<a href="/logout">Logout</a>')

Please note that adding a cookie which expires in the past makes the browser forget about the cookie immediately.
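The expire-in-the-past trick can be seen in isolation (a version-agnostic sketch; the module is named Cookie in Python 2 and http.cookies in Python 3, and the exact date string in the output varies):

```python
try:
    from Cookie import SimpleCookie         # Python 2
except ImportError:
    from http.cookies import SimpleCookie   # Python 3

cookie = SimpleCookie()
cookie['ACSID'] = ''
cookie['ACSID']['expires'] = -86400  # A day in the past.
header = cookie.output()
# header is a Set-Cookie line whose expiry date is already in the
# past, e.g.: Set-Cookie: ACSID=""; expires=<yesterday, HTTP date>
print(header)
```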

2011-06-24

Python 3.2 binaries released in StaticPython for Linux

This blog post is to announce that StaticPython, a binary distribution of Python, has just released Python 3.2 binaries for Linux. (Previously StaticPython contained only Python 2.7 binaries.) Download links for the binaries:

  • python3.2: almost all built-in Python modules, C extensions (including sqlite3) and greenlet
  • stackless3.2: ditto, but with Stackless Python
  • stacklessxl3.2: ditto, and also OpenSSL (with the _ssl and _hashlib modules)

The Python 3.2 binaries of StaticPython can be useful on Linux and FreeBSD systems where Python 3.2 distribution packages are not available, and recompiling from source would be too inconvenient. They can also be used to demo the features of Python 3 for users with a little time and patience, and without a commitment to install it.

2011-02-01

How to run Syncless on Linux without installing it?

This blog post explains how to run asynchronous, coroutine-based client and server I/O libraries like Syncless, gevent and Concurrence on Linux without installing them.

On Linux, it's possible to try Syncless without installation, using the StaticPython binary Python distribution, like this:

$ wget -O stacklessco2.7-static \
  https://raw.githubusercontent.com/pts/staticpython/master/release/stacklessco2.7-static
$ chmod +x stacklessco2.7-static
$ ./stacklessco2.7-static
Python 2.7.1 Stackless 3.1b3 060516 (release27-maint, Feb  1 2011, 16:57:16)
[GCC 4.1.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from syncless import coio
>>> coio.sleep(1.5)
(sleeping for 1.5 second)
<object object at 0xf7709490>
>>>

Limitations:

  • This solution requires Linux on x86 (i386) or x86_64 (amd64) architecture. If you have a different Unix system, you have to install Syncless the regular way (e.g. with sudo easy_install syncless).
  • This solution doesn't give you the newest version of Syncless.
  • You can't use Python extensions written in C, except for those compiled into stacklessco2.7-static. See the full list on the StaticPython home page.

Please note that StaticPython can run many other I/O frameworks and libraries, which don't require compiling C extensions. Such libraries include Eventlet, Tornado, Twisted, asyncore and circuits.

2010-10-30

pysshsftp: proof-of-concept SFTP client for Unix which uses the OpenSSH ssh(1) command-line tool

This blog post is an announcement for pysshsftp, a proof-of-concept, educational SFTP client for Unix, which uses the OpenSSH ssh(1) command-line tool (for establishing the secure connection), but it doesn't use the sftp(1) command-line tool (so it can e.g. upload files without truncating them first).

Only a very small part of the SFTP protocol has been implemented so far (initialization, uploading, and stat()ting files). The SFTP protocol was reverse-engineered from sftp-client.c in the OpenSSH 5.1 source code.

The motivation behind writing pysshsftp was to have an SFTP client which

  • supports uploading files without truncating them (the OpenSSH sftp(1) always truncates the file before uploading data bytes);
  • can be easily scripted from Python;
  • uses the OpenSSH ssh(1) command-line tool for establishing the secure connection (other Python libraries like pysftp and paramiko can't use the user's public key configuration properly by default: they don't support reading passphrases for passphrase-protected keys, they don't support reading keys from ssh-agent, and they don't read ~/.ssh/id_rsa and ~/.ssh/id_dsa exactly the same way as OpenSSH does).

2010-10-29

Multilingual programs in Haskell, Prolog, Perl and Python

This blog post shows program source code that works in multiple languages (including Haskell, Prolog, Perl and Python).

Here is a program which works in Haskell, Prolog and Perl:

$ cat >m1.prg <<'END'
foo( {-1/*_} ) if!'*/%'; print "Hello, Perl!\n" if<<'-- */'
}). :- write('Hello, Prolog!'), nl, halt.
/* -} x) = x
main = putStrLn "Hello, Haskell!"
-- */
END
$ perl m1.prg
Hello, Perl!
$ swipl -f m1.prg
Hello, Prolog!
$ cp m1.prg m1.hs
$ ghc m1.hs -o m1
$ ./m1
Hello, Haskell!

Here is a program which works in Haskell, Prolog and Python:

$ cat >m2.prg <<'END'
len( {-1%2:3} ); print "Hello, Python!"; """
}). :- write('Hello, Prolog!'), nl, halt.
/* -} x) = x
main = putStrLn "Hello, Haskell!"
-- """ # */
END
$ python m2.prg
Hello, Python!
$ swipl -f m2.prg
Hello, Prolog!
$ cp m2.prg m2.hs
$ ghc m2.hs -o m2
$ ./m2
Hello, Haskell!

It's possible to have polyglots of 8, 10 and even 22 programming languages, see them here.

2010-08-11

How to try Stackless Python on Linux without installing it?

This blog post gives instructions on trying Stackless Python without installing it on Linux systems (32-bit and 64-bit).

Use the Stackless version of the StaticPython binary executable, which is a portable version of Stackless Python 2.7 for Linux systems. It is linked statically, with most standard Python modules and C extensions included in the executable binary, so no installation is required. Here is how to try it:

$ wget -O stackless2.7-static \
  https://raw.githubusercontent.com/pts/staticpython/master/release/stackless2.7-static
$ chmod +x stackless2.7-static
$ ./stackless2.7-static
Python 2.7 Stackless 3.1b3 060516 (release27-maint, Aug 11 2010, 13:55:35) 
[GCC 4.1.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import stackless
>>> 

See more information about features, suggested uses and limitations on the StaticPython home page.

StaticPython released

This blog post is an announcement of the StaticPython software distribution and its first release.

What is StaticPython?

StaticPython is a statically linked version of the Python 2.x (currently 2.7) interpreter and its standard modules for 32-bit Linux systems (i686, i386). It is distributed as a single, statically linked 32-bit Linux executable binary, which contains the Python scripting engine, the interactive interpreter with command editing (readline), the Python debugger (pdb), most standard Python modules (including pure Python modules and C extensions), coroutine support using greenlet and multithreading support. The binary contains both the pure Python modules and the C extensions, so no additional .py or .so files are needed to run it. It also works in a chroot environment. The binary uses uClibc, so it supports username lookups and DNS lookups as well (without NSS).

Download and run

Download the latest StaticPython 2.7 executable binary.

Here is how to use it:

$ wget -O python2.7-static \
  https://raw.githubusercontent.com/pts/staticpython/master/release/python2.7-static
$ chmod +x python2.7-static
$ ./python2.7-static
Python 2.7 (r27:82500, Aug 11 2010, 10:51:33) 
[GCC 4.1.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>

For what purpose is StaticPython useful?

  • for running Python scripts in chroot and other restricted environments (e.g. for running untrusted standard Python code)
  • for running CGI scripts on some web hosting providers where Python is not preinstalled (this is slow)
  • for trying the newest Python and running scripts on a very old Linux system or on a system where package installation is not possible or cumbersome
  • for deploying and running Tornado, Eventlet or Twisted-based networking applications on a machine without a working Python installation (please note that the framework itself has also to be deployed)

When isn't StaticPython recommended?

  • if dependencies need Python package installation (e.g. distutils, setuptools, setup.py, Pyrex, Cython) -- distutils is not included, there is no site-packages directory, loading .so files is not supported
  • if dependencies are or need shared libraries (.so files) -- StaticPython doesn't support loading .so files, not even with the dl or ctypes modules
  • if startup and module import has to be fast -- since StaticPython stores .py files (no .pyc) in a ZIP file at the end of the binary
  • for GUI programming -- no GUI or graphics library is included, loading .so files is not supported

How to extend, customize or recompile StaticPython?

Compiling python2.7-static was a one-off manual process. Automating this process has not been implemented yet. Please contact the author if you need this.

Feature details

Features provided

  • command-line editing with the built-in readline module
  • almost all standard Python 2.7 modules built-in
  • almost all standard C extensions built-in: _bisect, _codecs, _codecs_cn, _codecs_hk, _codecs_iso2022, _codecs_jp, _codecs_kr, _codecs_tw, _collections, _csv, _curses, _elementtree, _functools, _heapq, _hotshot, _io, _json, _locale, _lsprof, _md5, _multibytecodec, _multiprocessing, _random, _sha, _sha256, _sha512, _socket, _sockobject, _sqlite3 (added 1004307 bytes to the executable binary size), _sre, _struct, _symtable, _weakref, array, audioop, binascii, bz2, cPickle, cStringIO, cmath, crypt, datetime, errno, fcntl, fpectl, future_builtins, gc, grp, imageop, itertools, math, mmap, operator, parser, posix, pwd, pyexpat, readline, resource, select, signal, spwd, strop, syslog, termios, thread, time, timing, zipimport, zlib
  • greenlet integrated for coroutine support
  • multithreading (using thread and threading as usual)
  • line editing even without a terminfo definition or inputrc file (useful in chroot, provided by libreadline/libncurses by default)
  • the usual help and license commands available in interactive mode

Executable binary layout

  • Python 2.7 (r27:82500) for Linux i386, statically linked
  • pure Python modules and C extensions integrated into a single, statically linked i386 (i686) Linux executable
  • compiled with uClibc, so it supports username lookups and DNS lookups without /lib/libnss_*so* (i.e. without NSS)
  • can run from any directory, even in a chroot containing only the binary, even in interactive mode, even without /proc mounted

Features missing

  • missing: loading .so files (C shared libraries, Python C extensions)
  • missing: the ctypes module and the dl module (since no .so file loading support)
  • missing: the distutils module, custom extension installation
  • missing: OpenSSL, SSL sockets
  • missing: IPv6 support
  • missing: GUI bindings (tkinter, GTK, Qt etc.)
  • missing: Stackless Python

2010-05-29

How to implement a semaphore using mutexes

This blog post shows an educational reference implementation of a semaphore using mutexes in Python.

Please note that the number of mutexes needed in this implementation is 1 + the number of threads waiting in the acquire operation while the value is 0. This is too much, 2 or 3 mutexes altogether would be enough, see the various implementations in Implementation of General Semaphores Using Binary Semaphores.

The implementation is available for download from http://pts-mini-gpl.googlecode.com/svn/trunk/script/semaphore.py.

import thread
from collections import deque


class Semaphore(object):
  def __init__(self, value=1):
    assert value >= 0, 'Semaphore initial value must be >= 0'
    self._lock = thread.allocate_lock()   # Create a mutex.
    self._waiters = deque()
    self._value = value

  def acquire(self, blocking=True):
    """Semaphore P primitive.  Returns False on a failed non-blocking attempt."""
    self._lock.acquire()
    while not self._value:
      if not blocking:
        self._lock.release()  # Don't decrement below 0 on a failed attempt.
        return False
      assert self._lock.locked(), 'wait() of un-acquire()d lock'
      waiter = thread.allocate_lock()
      waiter.acquire()
      self._waiters.append(waiter)
      self._lock.release()
      try:  # Restore state even if e.g. KeyboardInterrupt is raised.
        waiter.acquire()
      finally:
        self._lock.acquire()
    self._value -= 1
    self._lock.release()
    return True

  def release(self):
    """Semaphore V primitive."""
    self._lock.acquire()
    self._value += 1
    assert self._lock.locked(), 'notify() of un-acquire()d lock'
    _waiters = self._waiters
    if _waiters:
      _waiters.popleft().release()
    self._lock.release()


if __name__ == '__main__':
  s = Semaphore(3)
  def Reader():
    while True:
      raw_input()
      s.release()
  thread.start_new_thread(Reader, ())
  print 'Press <Enter> twice to continue.'
  for i in xrange(s._value + 2):
    print i
    s.acquire()
  print 'Done.'

2010-05-18

Feature comparison of Python non-blocking I/O libraries

This blog post is a tabular feature comparison of Syncless and the 6 most popular event-driven and coroutine-based non-blocking (asynchronous) networking I/O libraries for Python. It was inspired by http://nichol.as/asynchronous-servers-in-python (published on 2009-11-22), which compares the features and the performance of 14 Python non-blocking networking I/O libraries. We're not comparing generator-based (yield) solutions here. We haven't made performance measurements, so speed-related claims in this comparison are beliefs and opinions rather than well-founded facts. Here are the results:

feature | Concurrence | Eventlet | Tornado | Twisted | asyncore | gevent | circuits | Syncless
pure Python: can work without compiling C code | no | yes | yes | yes | yes | no | yes | no
pure Python: runs at full speed without compiling C code | no | yes | yes | yes | yes | no | yes | no
standard module in Python 2.6 | no | no | no | no | yes | no | no | no
has asynchronous DNS resolver | no | yes, 1) | no | yes, 2) | no | yes, 3) | no | yes, 4)
supports running other tasklets while DNS resolving is in progress | no | yes | no | yes | no | yes | no | yes
has fully asynchronous and scalable DNS resolver | no | yes, 5) | no | yes | no | yes | no | yes
supports timeouts on individual socket send() and recv() operations | yes | yes | no | yes | no | yes | no | yes
has WSGI server | yes | yes | yes | yes | no | yes | yes--, 6) | yes
can work with CherryPy's WSGI server | no | yes--, 7) | no | no | no | yes--, 7) | no | yes
contains a custom, non-WSGI web framework | yes | no | yes | yes | no | no++, 8) | yes | no
can run external web frameworks using non-WSGI connectors | no | yes, 9) | no | no | no | no | no | yes++, 10)
runs with Stackless Python | yes | yes--, 11) | yes, 12) | yes, 12) | yes, 12) | no | yes, 12) | yes
runs with greenlet | yes--, 13) | yes | yes, 12) | yes, 12) | yes, 12) | yes | yes, 12) | yes
has fast non-blocking socket class implemented in C or Pyrex | no | no | no | no | no | no | no | yes
has fast read and write buffer code implemented in C or Pyrex | yes | no | no | no | no | yes, 14) | no | yes
uses fast (C or Pyrex) buffer code for its socket.socket.makefile() | yes | no | no | no | no | no | no | yes
uses fast (C or Pyrex) buffer code for its ssl.SSLSocket.makefile() | no, 15) | no | no | no | no | no | no | yes
has SSL support | no | yes, 16) | no | yes, 16) | no | yes, 16) | yes, 16) | yes, 16)
has monkey-patching for socket.socket and other blocking I/O operations | no | yes | no, 12) | no, 12) | no, 12) | yes | no, 12) | yes
prints and recovers from an uncaught exception | yes, 17) | yes | yes | yes | yes | yes | no, 18) | no, 19)
can use libevent | yes | yes | no | yes | no | yes | no | yes
can use the libevent emulation of libev | yes | yes | no | yes | no | no, 20) | no | yes
works without libevent or an emulation installed | no | yes | yes | yes | yes | no | yes | yes
avoids C stack copying (which is slower than soft switching) | no | no, 21) | yes, 12) | yes, 12) | yes, 12) | no, 21) | yes, 12) | no, 22)
can use a high performance event notification primitive (e.g. epoll on Linux) | yes, 23) | yes | yes | yes | no, 24) | yes, 23) | yes | yes, 25)
nichol.as: What license does the framework have? | yes, 26) | yes, 26) | yes, 27) | yes, 26) | yes, 26) | yes, 26) | yes, 28) | yes, 29)
nichol.as: Does it provide documentation? | yes | yes | yes | yes++, 30) | no | yes | yes | yes--, 31)
nichol.as: Does the documentation contain examples? | yes | yes | yes | yes | no | yes | yes | yes
nichol.as: Is it used in production somewhere? | yes, 32) | yes, 33) | yes, 34) | yes | yes, 35) | yes, 36) | yes, 37) | no
nichol.as: Does it have some sort of community (mailinglist, irc, etc..)? | yes | yes | yes++, 38) | yes | no | yes | yes | no
nichol.as: Is there any recent activity? | no, 39) | yes | yes | yes | no | yes | yes | yes
nichol.as: Does it have a blog (from the owner)? | yes | yes | yes, 40) | yes++, 41) | no | yes | yes | yes--, 42)
nichol.as: Does it have a Twitter account? | yes | yes | yes | yes | no | yes | yes | yes--, 42)
nichol.as: Where can I find the repository? (without links) | yes, 43) | yes, 44) | yes, 43) | yes, 45) | yes, 46) | yes, 47) | yes, 44) | yes, 48)
nichol.as: Does it have a thread pool? | no | yes | no | yes | no | no | yes | yes--, 49)
nichol.as: Does it provide a HTTP server for WSGI applications? | yes | yes | yes--, 50) | yes | no | yes | yes--, 6) | yes
nichol.as: Does it provide access to a TCP Socket? | yes | yes | yes | yes | yes | yes | yes | yes
nichol.as: Does it have any Comet features? | no | no | no | no | no | no | no | no
nichol.as: Is it using epoll, if available? | yes, 23) | yes | yes | yes | no, 24) | yes, 23) | no++, 51) | yes, 25)
nichol.as: Does it have unit tests? | yes | yes | no | yes++, 52) | yes | yes | yes | yes, 53)
provides generic timeout for any block of code | yes | yes | yes | no | no | yes | no | yes
provides synchronization primitives (e.g. semaphore, condition variable) | yes--, 54) | yes | no | yes | no | yes | no | no
lets the programmer control event delivery order (e.g. with priorities) | no | no | no | no | no | no | no | no
provides callbacks (links) when some work is finished | no | yes | yes, 12) | yes, 12) | yes, 12) | yes | yes, 12) | no
has a high-level, comprehensive, consistent network programming framework | no | yes | yes--, 55) | yes | no, 56) | yes--, 55) | yes | no
has non-blocking select.select() implementation | no | yes | no, 12) | no, 12) | no, 12) | yes | no, 12) | yes
implements some application-level protocols beyond HTTP, WSGI and DNS | yes, 57) | no++, 58) | yes, 59) | yes++, 60) | no | no++, 58) | yes, 61) | no++, 62)
compatible with other non-blocking systems in the same process | no, 63) | yes, 64) | no, 63) | yes++, 65) | no, 63) | no++, 66) | yes, 67) | yes++, 68)

Comments:
1) thread pool or Twisted
2) built-in
3) evdns
4) evdns or its equivalents: minihdns or evhdns
5) if Twisted is available
6) does not implement the WSGI spec properly (no write or close)
7) by manually monkey-patching socket and thread
8) has its own simple HTTPServer class
9) BaseHTTPServer
10) BaseHTTPServer, CherryPy, web.py, Google webapp
11) has partial, incomplete and incompatible greenlet emulation
12) event-driven
13) has partial and incompatible Stackless Python emulation
14) evbuffer
15) no SSL support
16) client and server
17) in the Tasklet class
18) silently ignores exceptions and keeps the connection hanging
19) the process exits
20) needs evdns
21) uses greenlet
22) calls stackless.schedule() from C code
23) uses libevent
24) but it uses poll, and epoll could be added easily
25) can use libev or libevent
26) MIT
27) Apache
28) GNU GPL v2 or later
29) Apache License, Version 2.0
30) extensive
31) README and docstrings
32) Hyves.nl (a Dutch social networking site)
33) Second Life (a virtual reality game)
34) FriendFeed (a social networking site)
35) pyftpdlib, supervisor [link] (via medusa)
36) many, [link] and [link]
37) Naali, TAMS, website-profiler, kdb IRC bot, python-brisa uPnP
38) huge
39) not since 2009-11-19
40) at facebook
41) lots
42) the author writes Syncless-related stuff to his blog
43) GIT on github
44) Mercurial on bitbucket
45) in its own Subversion + Trac repository
46) in Python's Subversion
47) Mercurial on bitbucket
48) Subversion on Google Code
49) not tested in production for robustness and speed
50) limited
51) it can be enabled manually
52) extensive
53) unit tests cover only parts of the functionality
54) provides queues
55) not comprehensive
56) low-level
57) MySQL client
58) Python console backdoor, can monkey-patch external modules
59) storage server compatible with Amazon S3; OpenID
60) tons of protocols
61) pygame, GTK+ and inotify
62) can monkey-patch external modules
63) not by default
64) has Twisted reactor
65) many including GTK+ and Qt
66) not by default, but there is the project gTornado
67) pygame and GTK
68) has monkey-patching for Concurrence, Eventlet, Tornado, Twisted, gevent, circuits and asyncore

2010-04-29

Google Code Data API

This blog post documents the options for programmatic (non-web-browser) access to open source project hosting at Google Code.

As of 2010-04-29, there is no full Google Code Data API, but there is an API for uploading files and an API for the issue tracker.

For uploading files, the following scripts can be useful:

2010-04-19

Syncless: Non-blocking WSGI server and networking library for Stackless Python

This blog post is a release announcement of the software named Syncless, which I've been developing recently for high-performance, low-memory, single-process, single-threaded, lock-free network communication (TCP/IP client and server) in Python, using coroutines for apparent parallel processing. I needed something that needs less memory than pre-forked worker instances, but doesn't require complicated and error-prone synchronization mechanisms (such as mutexes in a multi-threaded program).

Syncless is an experimental, lightweight, non-blocking (asynchronous) client and server socket network communication library for Stackless Python 2.6. For high speed, Syncless uses libevent, and parts of Syncless' code are implemented in C (Pyrex). Thus Syncless can be faster than many other non-blocking Python communication libraries. Syncless contains an asynchronous DNS resolver (using evdns) and an HTTP server capable of serving WSGI applications. Syncless aims to be a coroutine-based alternative to event-driven networking engines (such as Twisted and FriendFeed's Tornado), and it's a competitor of gevent, pyevent, Eventlet and Concurrence.

Version 0.03 of Syncless has been released. It is a complete rewrite, so it has drop-in replacements for the built-in Python socket.socket, socket.gethostbyname (etc.), ssl.SSLSocket, time.sleep and select.select. The replacement implementations are easily monkey-patchable, so legacy (blocking) pure Python I/O code can be made non-blocking without a code change. Core functionality is now written in Pyrex/Cython for greater speed.
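Since Syncless serves standard WSGI applications, any app written against the WSGI interface works unchanged. For reference, here is a minimal WSGI application (shown in modern Python 3 syntax and exercised with a stand-in start_response instead of a real server):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """Minimal WSGI application: returns a plain-text greeting."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, World!\n']

# Exercise the app directly, without starting a server.
environ = {}
setup_testing_defaults(environ)  # fills in a plausible WSGI environ
captured = []
body = b''.join(app(environ, lambda status, headers: captured.append(status)))
print(captured[0], body)
```

Any WSGI-capable server (Syncless's HTTP server included) can serve the same `app` callable.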

2010-03-27

Python web framework of the day

Since there are quite a lot of web application frameworks for Python, I think one more wouldn't hurt.

I was impressed by the compactness of Ruby's Sinatra:

require 'rubygems'
require 'sinatra'
get '/' do
  'Hello world!'
end

My goal was to create something similar for Python, as a proof-of-concept. Here is the equivalent Hello, world web application with my Python syntax:

from wfotd import *
@GET
def _():
  return 'Hello world!'

The Ruby solution looks much clearer and more beautiful. Nevertheless, it was a nice Python coding experiment. I named it PYthon Web Framework Of The Day. The source is available at http://code.google.com/p/pts-mini-gpl/source/browse/#svn/trunk/pywfotd.

Here is a larger Python example, demonstrating the scoping and argument passing features of pywfotd:

from wfotd import *
@GET
def _(name=None):
  name = name or 'World'
  return 'Hello, %s!' % name

@GET
def foo():
  return 'This is /foo'

class bar:
  class baz:
    @GET
    def _():
      return 'This is /bar/baz'
    @GET
    def quux():
      return 'This is /bar/baz/quux'
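For illustration, here is one plausible way such a decorator and the nested-class-to-URL-path mapping could be implemented. This is a hypothetical sketch using Python 3's `__qualname__`, not the actual pywfotd source (which targeted Python 2 and thus had to work differently):

```python
ROUTES = {}

def GET(func):
    """Register a handler under the URL path derived from its name.
    A function named _ serves its enclosing directory itself."""
    parts = func.__qualname__.split('.')  # e.g. 'bar.baz._' for nesting
    if parts[-1] == '_':
        parts.pop()
    ROUTES['/' + '/'.join(parts)] = func
    return func

def dispatch(path):
    """Call the handler registered for path (KeyError means 404)."""
    return ROUTES[path]()

@GET
def _():
    return 'Hello world!'

@GET
def foo():
    return 'This is /foo'

class bar:
    class baz:
        @GET
        def _():
            return 'This is /bar/baz'
```

With this sketch, `dispatch('/bar/baz')` finds the handler registered by the nested class, mirroring the example above.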

2010-01-08

Emulating Stackless Python using greenlet

This blog post documents why and how to emulate Stackless Python using greenlet.

Stackless Python is an extended version of the Python language (and its CPython reference implementation). New features include lightweight coroutines (called tasklets), communication primitives using message passing (called channels), manual and/or automatic coroutine scheduling, not using the C stack for Python function calls, and serialization of coroutines (for reloading in another process). Stackless Python could not be implemented as a Python extension module – the core of the CPython compiler and interpreter had to be patched.

greenlet is an extension module to CPython providing coroutines and low-level (explicit) scheduling. The most important advantage of greenlet over Stackless Python is that greenlet could be implemented as a Python extension module, so the whole Python interpreter doesn't have to be recompiled in order to use greenlet. Disadvantages of greenlet include speed (Stackless Python can be 10%, 35% or 900% faster, depending on the workload); possible memory leaks if coroutines have references to each other; and that the provided functionality is low-level (i.e. only manual coroutine scheduling, no message passing provided).

greenstackless, the Python module I've recently developed, provides most of the (high-level) Stackless Python API using greenlet, so it eliminates the disadvantage of greenlet that it is low-level. See the source code and some tests (the latter with tricky corner cases). Please note that although greenstackless is optimized a bit, it can be much slower than Stackless Python, and it also doesn't fix the memory leaks. Using greenstackless is thus not recommended in production environments; but it can be used as a temporary, drop-in replacement for Stackless Python if replacing the Python interpreter is not feasible.
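To illustrate the kind of API being emulated: Stackless runs tasklets cooperatively, round-robin, each tasklet running until it voluntarily yields control. The scheduling idea (not the implementation — neither Stackless nor greenstackless works this way internally, and all names here are made up) can be sketched with plain generators:

```python
from collections import deque

class ToyScheduler:
    """Round-robin cooperative scheduler; a yield inside a tasklet
    plays the role of stackless.schedule()."""
    def __init__(self):
        self.runnable = deque()

    def tasklet(self, gen):
        self.runnable.append(gen)

    def run(self):
        while self.runnable:
            task = self.runnable.popleft()
            try:
                next(task)          # resume the tasklet until its next yield
            except StopIteration:
                continue            # tasklet finished; drop it
            self.runnable.append(task)  # still alive: requeue at the back

log = []

def worker(name, steps):
    for i in range(steps):
        log.append((name, i))
        yield                       # give up control, like stackless.schedule()

sched = ToyScheduler()
sched.tasklet(worker('a', 2))
sched.tasklet(worker('b', 2))
sched.run()
print(log)  # the two workers run interleaved, not one after the other
```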

Some other software that emulates Stackless using greenlet:

  • part of Concurrence: doesn't support stackless.main, tasklet.next, tasklet.prev, tasklet.insert, tasklet.remove or stackless.schedule_remove, and doesn't send exceptions properly. (Because these features are missing, it doesn't pass the unit test above.)
  • part of pypy: doesn't support stackless.main, tasklet.next or tasklet.prev, and doesn't pass the unit test above.

For the other way round (emulating greenlet using Stackless), see greenlet_fix (source code). Although Stackless is a bit faster than greenlet, the emulation in greenlet_fix makes it about 20% slower than native greenlet.

My original purpose for this emulation was to use gevent with Stackless and see if it becomes faster (than the default greenlet). It turned out to become slower. Then I benchmarked Concurrence (with libevent and Stackless by default), pyevent with Stackless, pyevent with greenlet, gevent (with libevent and greenlet by default), Syncless (with epoll and Stackless by default), eventlet and Tornado (using callbacks), and found out the following:

  • Syncless was the fastest in my nonscientific benchmark, which was surprising since it had a clumsy event loop implementation in pure Python.
  • Wrapping libevent's struct evbuffer in Pyrex for line buffering is slower than manual buffering.
  • Using input buffering (for readline()) is much slower than manual buffering (reading until '\n\r\n' and splitting the input by \n).
  • Providing a proper WSGI environment is much slower than ad-hoc parsing of the first line of the HTTP request.
  • Wrapping libevent's struct evbuffer and the coroutine switching in Pyrex made it about 5% faster than manual buffering.
  • Syncless does too much unnecessary communication (using Stackless channels) between a worker and a main loop. This can be simplified and made faster using stackless.schedule_remove(). So the current Syncless implementation is a dead end, it should be rewritten from scratch.

My conclusion was that in order to get the fastest coroutine-based, non-blocking, line-buffering-capable I/O library for Python, I should wrap libevent (including event registration and I/O buffering) and Stackless and the WSGI server in hand-optimized Pyrex, manually checking the .c file Pyrex generates for inefficiencies. I'll be doing so soon.
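The manual buffering mentioned above — accumulating raw reads and scanning for the end-of-header marker instead of calling readline() per line — can be sketched like this (the function name and interface are illustrative, not Syncless's actual code):

```python
def read_http_head(recv, bufsize=4096):
    """Read with recv(bufsize) until the end of the HTTP request head,
    marked by the byte sequence LF CR LF, then split the head into
    lines manually. Returns (header_lines, leftover_bytes)."""
    buf = b''
    while True:
        i = buf.find(b'\n\r\n')
        if i >= 0:
            lines = [l.rstrip(b'\r') for l in buf[:i].split(b'\n')]
            return lines, buf[i + 3:]
        chunk = recv(bufsize)
        if not chunk:
            raise EOFError('connection closed before end of header')
        buf += chunk

# Simulate a socket delivering the request in two chunks.
chunks = iter([b'GET / HTTP/1.0\r\nHo', b'st: x\r\n\r\nBODY'])
lines, rest = read_http_head(lambda n: next(chunks, b''))
print(lines, rest)
```

This does one buffer scan per read instead of one system call per header line, which is the effect the benchmark bullet points describe.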

2010-01-03

Does string append run in linear time in Python?

In Python 2.x, the string append operation (string1 += string2) runs in amortized linear time (in the length of string2) iff there is only one reference to string1.

Example (leave exactly one of the lines in the loop body uncommented):

s = 'v'
h = {'k': 'v'}
for i in xrange(int(__import__('sys').argv[1])):
  s += 'v'  # linear (one reference to s)
  #t = s; s += 'v'  # quadratic (two references to s)
  #h['k'] += 'v'  # quadratic (two intermediate references to h['k'])
  #s = h['k']; h['k'] = ''; s += 'W'; h['k'] = s  # linear
  #s = h.pop('k'); s += 'W'; h['k'] = s  # linear
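A quick way to observe the difference is to time the two reference patterns side by side (Python 3 syntax here; the effect is a CPython implementation detail and absolute timings vary by machine):

```python
import time

def append_one_ref(n):
    s = ''
    for _ in range(n):
        s += 'v'      # sole reference: CPython can resize s in place
    return s

def append_two_refs(n):
    s = ''
    for _ in range(n):
        t = s         # extra reference defeats the in-place resize,
        s += 'v'      # so CPython copies the whole string each time
    return s

n = 20000
t0 = time.perf_counter()
append_one_ref(n)
t1 = time.perf_counter()
append_two_refs(n)
t2 = time.perf_counter()
print('one ref: %.4fs, two refs: %.4fs' % (t1 - t0, t2 - t1))
```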

2009-12-19

Experimental HTTP server using Stackless Python

This blog post documents my experiment to write a non-blocking HTTP server based on coroutines (tasklets) of Stackless Python. My goal was to write a minimalistic web server which can handle concurrent requests by using non-blocking system calls, multiplexing with select(2) or epoll(2), returning a simple Hello, World page for each request, using the coroutines of Stackless Python. I've done this, measured its speed using ApacheBench, and compared it to the Hello, World server of Node.js.

The code is here: http://code.google.com/p/pts-mini-gpl/source/browse/#svn/trunk/pts-stackless-httpd http://syncless.googlecode.com/svn/trunk/benchmark.old/

Relevant ApacheBench speed results (for ab -n 100000 -c 50 http://127.0.0.1:.../):

Notes about the speed measurements:
  • I was using a recently compiled Stackless Python 2.6 and a recently compiled psyco for JITting.
  • I was surprised that my experimental code using select(2) and Stackless Python is faster than Node.js (by a factor of 1.925 on average, and the worst-case times are faster as well).
  • The speed comparison is not fair since Node.js has a real HTTP server protocol implementation, with its overhead, and my code just skips the HTTP header without parsing it.
  • Setting the TCP socket listen queue size to 100 (using listen(2)) made a huge difference on the worst case connection time. Compared to the setting of 5, it reduced the worst-case connection time from 9200 ms to 23 ms (!) in the measurement.
  • The source code of both servers can be found in the repository above.
  • My conclusion about the speed measurements is that an HTTP server based on Stackless Python and epoll(2) can be a viable alternative to Node.js. It would be worthwhile implementing one, and then doing proper benchmarks.
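Stripped of the coroutines, the core select(2) loop of such an experiment can be sketched as follows (Python 3 syntax, a single-request toy rather than the real experimental server, which lives in the repository linked above):

```python
import select
import socket
import threading

RESPONSE = (b'HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n'
            b'Content-Length: 13\r\n\r\nHello, World!')

def serve(listener, max_requests):
    """Single-threaded select(2) loop: answers each connection with a
    canned response after the first read, skipping HTTP parsing."""
    listener.setblocking(False)
    readers = {listener}
    served = 0
    while served < max_requests:
        ready, _, _ = select.select(readers, (), ())
        for sock in ready:
            if sock is listener:
                conn, _addr = listener.accept()
                conn.setblocking(False)
                readers.add(conn)
            else:
                sock.recv(4096)       # skip the request without parsing it
                sock.sendall(RESPONSE)
                sock.close()
                readers.discard(sock)
                served += 1

listener = socket.socket()
listener.bind(('127.0.0.1', 0))       # pick a free port
listener.listen(100)                  # large backlog, as noted above
port = listener.getsockname()[1]
t = threading.Thread(target=serve, args=(listener, 1))
t.start()

client = socket.create_connection(('127.0.0.1', port))
client.sendall(b'GET / HTTP/1.0\r\n\r\n')
reply = client.recv(4096)
t.join()
client.close()
listener.close()
print(reply.split(b'\r\n', 1)[0])
```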

The advantage of using Stackless Python over callback-based solutions (such as Node.js in JavaScript, Twisted and Tornado) is that one can implement a non-blocking TCP server without being forced to use callbacks.

The unique advantage of Node.js over other solutions is that in Node.js not only socket communication is non-blocking, but DNS lookups, local filesystem access and other system calls as well – Node.js is non-blocking by design, but with other frameworks the programmer has to be careful not to accidentally call a blocking function. Avoiding a blocking function is especially cumbersome if a library used only provides a blocking interface.

Update: I've created an HTTP server capable of running WSGI applications. I've also integrated dnspython as an asynchronous DNS resolver. See the project Syncless.

Update: Added web.py and CherryPy support.

Update: I've realized that the functionality of Syncless has already been implemented many times in Python. Examples: Concurrence, eventlet, gevent. See the comparison.

Minimalistic client for Amazon S3 (AWS) in Python

I've written py-mini-s3, a minimalistic client for Amazon S3 (AWS) in pure Python. It supports the GET and PUT operations. See its source code.

The implementation is based on the documentation at http://docs.amazonwebservices.com/AmazonS3/latest/index.html?RESTAuthentication.html

2009-11-08

How to convert a Unix timestamp to a civil date

Here is how to convert a Unix timestamp to a Gregorian (western) civil date and back in Ruby:

# Convert a Unix timestamp to a civil date ([year, month, day, hour, minute,
# second]) in GMT.
def timestamp_to_gmt_civil(ts)
  t = Time.at(ts).utc
  [t.year, t.month, t.mday, t.hour, t.min, t.sec]
end

# Convert a civil date in GMT to a Unix timestamp.
def gmt_civil_to_timestamp(y, m, d, h, mi, s)
  Time.utc(y, m, d, h, mi, s).to_i
end

Here is a pure algorithmic solution, which doesn't use the Time built-in class:

# Convert a Unix timestamp to a civil date ([year, month, day, hour, minute,
# second]) in GMT.
def timestamp_to_gmt_civil(ts)
  s = ts%86400
  ts /= 86400
  h = s/3600
  m = s/60%60
  s = s%60
  x = (ts*4+102032)/146097+15
  b = ts+2442113+x-(x/4)
  c = (b*20-2442)/7305
  d = b-365*c-c/4
  e = d*1000/30601
  f = d-e*30-e*601/1000
  (e < 14 ? [c-4716,e-1,f,h,m,s] : [c-4715,e-13,f,h,m,s])
end

# Convert a civil date in GMT to a Unix timestamp.
def gmt_civil_to_timestamp(y, m, d, h, mi, s)
  if m <= 2; y -= 1; m += 12; end
  (365*y + y/4 - y/100 + y/400 + 3*(m+1)/5 + 30*m + d - 719561) *
      86400 + 3600 * h + 60 * mi + s
end
t = Time.now.to_i
fail if t != gmt_civil_to_timestamp(*timestamp_to_gmt_civil(t))

Please note that most programming languages have a built-in function named gmtime for doing timestamp_to_gmt_civil's work, and that Python has calendar.timegm for doing gmt_civil_to_timestamp's work. Most programming languages also have a built-in function named mktime for converting a local time to a timestamp (compare with gmt_civil_to_timestamp, which accepts GMT time as input).
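In Python the two directions look like this:

```python
import calendar
import time

ts = 1257606000                      # an arbitrary Unix timestamp
tm = time.gmtime(ts)                 # timestamp -> civil date in GMT
civil = (tm.tm_year, tm.tm_mon, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec)
back = calendar.timegm(tm)           # civil date in GMT -> timestamp
assert back == ts                    # the two functions are inverses
```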

Please be aware of the rounding direction of division for negative numbers when porting the Ruby code above to other programming languages. For example, Ruby has 7 / (-4) == -2, but many other programming languages have 7 / (-4) == -1.
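Python is one language where the port is direct, because its // operator floors toward negative infinity exactly like Ruby's integer division:

```python
def timestamp_to_gmt_civil(ts):
    """Unix timestamp -> [year, month, day, hour, minute, second] in GMT.
    Direct port of the Ruby code above; // floors like Ruby's /."""
    s = ts % 86400
    ts //= 86400
    h, m, s = s // 3600, s // 60 % 60, s % 60
    x = (ts * 4 + 102032) // 146097 + 15
    b = ts + 2442113 + x - x // 4
    c = (b * 20 - 2442) // 7305
    d = b - 365 * c - c // 4
    e = d * 1000 // 30601
    f = d - e * 30 - e * 601 // 1000
    if e < 14:
        return [c - 4716, e - 1, f, h, m, s]
    return [c - 4715, e - 13, f, h, m, s]

def gmt_civil_to_timestamp(y, m, d, h, mi, s):
    """Civil date in GMT -> Unix timestamp (port of the Ruby code above)."""
    if m <= 2:
        y -= 1
        m += 12
    return (365 * y + y // 4 - y // 100 + y // 400 + 3 * (m + 1) // 5 +
            30 * m + d - 719561) * 86400 + 3600 * h + 60 * mi + s

assert timestamp_to_gmt_civil(0) == [1970, 1, 1, 0, 0, 0]
assert gmt_civil_to_timestamp(2000, 2, 29, 0, 0, 0) == 951782400  # leap day
```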

The algorithmic solution above is part of the programming folklore. See more examples at http://www.google.com/codesearch?q=2442+%2F\+*7305

2009-11-07

AES encryption in Python using C extensions

This blog post gives example code how to do AES encryption and decryption in Python. The crypto implementations shown are written in C, but they have a Python interface. For pure Python implementations, have a look at Python_AES in tlslite or SlowAES.

Example Python code using the AES implementation in alo-aes:

#! /usr/bin/python2.4
# by pts@fazekas.hu at Sat Nov  7 11:45:38 CET 2009

# Download installer from http://www.louko.com/alo-aes/alo-aes-0.3.tar.gz
import aes

# Must be 16, 24 or 32 bytes.
key = '(Just an example for testing :-)'

# Must be a multiple of 16 bytes.
plaintext = 'x' * 16 * 3

o = aes.Keysetup(key)
# This uses ECB (simplest). alo-aes supports CBC as well (with o.cbcencrypt).
ciphertext = o.encrypt(plaintext)

assert (ciphertext.encode('hex') == 'fe4877546196cf4d9b14c6835fdeab1a' * 3)
assert len(ciphertext) == len(plaintext)
assert ciphertext != plaintext  # almost always true
decrypted = o.decrypt(ciphertext)
assert plaintext == decrypted

print 'benchmarking'
plaintext = ''.join(map(chr, xrange(237)) + map(chr, xrange(256))) * 9 * 16
assert len(plaintext) == 70992
for i in xrange(1000):
  assert plaintext == o.decrypt(o.encrypt(plaintext))

Example Python code using the AES implementation in PyCrypto:

#! /usr/bin/python2.4
# by pts@fazekas.hu at Sat Nov  7 12:04:44 CET 2009
# based on
# http://www.codekoala.com/blog/2009/aes-encryption-python-using-pycrypto/

# Download installer from
# http://ftp.dlitz.net/pub/dlitz/crypto/pycrypto/pycrypto-2.1.0b1.tar.gz   
from Crypto.Cipher import AES

# Must be 16, 24 or 32 bytes.
key = '(Just an example for testing :-)'

# Must be a multiple of 16 bytes.
plaintext = 'x' * 16 * 3

o = AES.new(key)
# This uses ECB (simplest). PyCrypto supports CBC as well (with
# AES.new(key, AES.MODE_CBC, iv)).
ciphertext = o.encrypt(plaintext)

assert (ciphertext.encode('hex') == 'fe4877546196cf4d9b14c6835fdeab1a' * 3)
assert len(ciphertext) == len(plaintext)
assert ciphertext != plaintext  # almost always true
decrypted = o.decrypt(ciphertext)
assert plaintext == decrypted

print 'benchmarking'
plaintext = ''.join(map(chr, xrange(237)) + map(chr, xrange(256))) * 9 * 16
assert len(plaintext) == 70992
for i in xrange(1000):
  assert plaintext == o.decrypt(o.encrypt(plaintext))

Please note how similar the two example codes are. (Only the import and the object creation are different.)
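Both asserts comparing the ciphertext hex to a repeated 16-byte pattern work because of ECB's defining weakness: identical plaintext blocks encrypt to identical ciphertext blocks. The property can be illustrated without AES at all, using a hashlib-based stand-in (a deterministic per-block transform only — unlike AES it is not invertible, so it is not usable encryption):

```python
import hashlib

def toy_ecb(key, plaintext):
    """Apply a deterministic per-block transform, ECB-style.
    Illustrates block repetition only; NOT usable encryption."""
    assert len(plaintext) % 16 == 0
    blocks = [plaintext[i:i + 16] for i in range(0, len(plaintext), 16)]
    return b''.join(hashlib.sha256(key + b).digest()[:16] for b in blocks)

key = b'(Just an example for testing :-)'
ciphertext = toy_ecb(key, b'x' * 16 * 3)
# Three identical plaintext blocks -> three identical ciphertext blocks,
# so ECB leaks plaintext structure; CBC with a random IV would not.
assert ciphertext[:16] == ciphertext[16:32] == ciphertext[32:48]
```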

On an Intel Centrino T2300 1.66GHz system, PyCrypto seems to be about 8% faster than alo-aes. The alo-aes implementation is smaller, because it contains only AES.