Thursday, May 19, 2011

Pure Python physics?

Recently I tried to get pybox2d compiled and running with PyPy, but it failed for whatever reason. I had it in my head to just give up and port the whole bloody thing to pure Python and see how that would go. Not wanting to waste inspiration, I gave that a shot. Fast-forward to today, and it's pretty much done.

So, what's in the port? (almost) All features of Box2D up to r175:
  • All shape types (polygons, circles, edges, and loops)
  • All joint types (distance, revolute, friction, prismatic, weld, rope, wheel, mouse, pulley, gear)
  • A dynamic tree class for AABBs
  • Collisions, contacts, shape distance calculations, etc.
  • A mostly Pythonic interface to all of the useful classes
  • The testbed, in simplified form

Before you jump for joy, there are many reasons to not use this:

  • It's slow.
  • It's really slow.
  • It has a bad name (pypybox2d -- changing it is on the TODO list)
  • It's still early, though almost everything has worked well enough in my tests. I'm positive there are plenty of typos and minor bugs.
  • Some source files are much too big, because I haven't gotten around to splitting them up.
  • It uses __slots__ for my own debugging purposes. I have no qualms about (almost wholly) removing their usage eventually.
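For anyone unfamiliar with that last point, here's a minimal sketch (hypothetical `Vec2`-style classes, not pypybox2d's actual ones) of why `__slots__` is handy while debugging: assigning to a misspelled attribute fails loudly instead of silently creating a new one.

```python
class Vec2Slots(object):
    # __slots__ fixes the set of allowed attributes, so attribute
    # typos raise AttributeError instead of being silently accepted.
    __slots__ = ('x', 'y')

    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y


class Vec2Plain(object):
    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y


v = Vec2Plain()
v.z = 3.0       # typo (meant v.y?) -- silently creates a new attribute

w = Vec2Slots()
try:
    w.z = 3.0   # the same typo is caught immediately
except AttributeError as e:
    print('caught:', e)
```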
"Why do this, then, if it's so slow and unusable?" Do you really have to ask? Why not?

So, what of PyPy after all that hard work? Unfortunately, in my few tests, it's been slower than CPython **. Perhaps one day it will be sped up by another means. Or maybe I'll figure out a way to get Cython working with it such that it won't totally screw everything up. If you have any ideas, please do get in touch with me.

Find the source here. The testbed requires pygame, but there's a (very) simple graphics-less example in examples/ if you simply must run something.

** edit May 20th: But those tests were superficial, only testing for a few iterations. Increase the number of iterations, and PyPy wins, hands-down. See Antonio Cuni's helpful comment below for more information.


  1. I tried to benchmark this against PyPy, using a modified version that
    does an arbitrary number of steps().

    I benchmarked the pure Python implementation both on CPython 2.7.1 and on
    PyPy fa691b4a848a (which is more or less the same as 1.5, with a few improvements).

    Here are the results (in seconds) for an increasing number of iterations:

    CPython 2.7.1:
    100: 0.01
    1000: 0.14
    10000: 1.3
    100000: 13.19
    1000000: 131.54

    PyPy:
    100: 0.05
    1000: 0.5
    10000: 2.06
    100000: 6.54
    1000000: 51.22

    As you can see, the JIT takes longer to warm up, but in the end it produces
    better code.
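    One way to see the warmup effect directly is to time each batch size separately and compare the per-iteration cost. This is a sketch with a stand-in workload; in the real benchmark the inner call would be a world step on a pypybox2d world.

```python
import time


def step():
    # Stand-in workload; in the real benchmark this would be a single
    # physics step on a pypybox2d world.
    s = 0.0
    for i in range(1000):
        s += i * 0.5
    return s


def timed_batches(counts=(100, 1000, 10000)):
    # Time each batch separately: under a JIT, later (larger) batches
    # show a falling per-iteration cost once compilation has kicked in.
    results = []
    for n in counts:
        t0 = time.time()
        for _ in range(n):
            step()
        results.append((n, time.time() - t0))
    return results


for n, elapsed in timed_batches():
    print('%7d: %.2f' % (n, elapsed))
```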

    Then, I looked at the code produced by the PyPy JIT, and saw that it spent
    a lot of time in instance checks, caused by the "isinstance(other,
    numbers.Number)" calls. It turns out that the caching done by
    __instancecheck__ is very JIT-unfriendly. So, I removed the reference to
    ABCs by adding this line at the top:

    numbers.Number = (float, int, long)

    On CPython, it gives a small improvement (~12%), but on PyPy it makes it 2.3
    times faster!
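    The effect of the change can be sketched with a toy class (hypothetical, not pypybox2d's actual Vec2): a concrete tuple of types is a plain type test, while `numbers.Number` routes every check through the ABC's `__instancecheck__` machinery.

```python
import numbers

# Concrete numeric types; on Python 2 this tuple would also include `long`.
NUMBER_TYPES = (float, int)


class Vec2(object):
    __slots__ = ('x', 'y')

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def __mul__(self, other):
        # A plain tuple-of-types check the JIT can see through, instead of
        # isinstance(other, numbers.Number), which goes through the ABC's
        # cached __instancecheck__ on every call.
        if isinstance(other, NUMBER_TYPES):
            return Vec2(self.x * other, self.y * other)
        return NotImplemented


assert isinstance(3.0, numbers.Number)  # the ABC path (slow under the JIT)
v = Vec2(1.0, 2.0) * 3.0
print(v.x, v.y)  # 3.0 6.0
```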

    CPython 2.7.1 w/o ABCs
    100: 0.01
    1000: 0.12
    10000: 1.16
    100000: 11.81
    1000000: 117.39

    PyPy w/o ABCs
    100: 0.04
    1000: 0.44
    10000: 1.7
    100000: 3.64
    1000000: 22.25

    Finally, I started to play a bit with the various JIT parameters, and found a
    way to improve performance even further, by passing "--jit trace_limit=20000" to PyPy:

    PyPy w/o ABCs --jit trace_limit=20000
    100: 0.05
    1000: 0.46
    10000: 1.61
    100000: 3.03
    1000000: 16.83

    So, at the end of the day, in the long run PyPy is at least 7.8 times faster
    than CPython. Still a lot slower than the C++ version, though :-(

  2. Very interesting, Antonio. I'll admit my original PyPy tests were rather artificial. It does make sense that the JIT needs some time to "warm up".

    I'll change it around to remove the numbers.Number dependency. Breaking up the functions further would even allow the removal of isinstance in some cases, at the expense of some ease of use for the vector/matrix classes.
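    A sketch of what that split might look like (hypothetical names, not pypybox2d's API): instead of one polymorphic `__mul__` that type-checks its argument, the scalar and vector cases become separate, explicitly named methods.

```python
class Vec2(object):
    __slots__ = ('x', 'y')

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    # No isinstance dispatch: each operation accepts one fixed argument
    # type, at the cost of callers picking the right method themselves.
    def scaled(self, k):
        """Scalar multiplication."""
        return Vec2(self.x * k, self.y * k)

    def dot(self, other):
        """Vector dot product."""
        return self.x * other.x + self.y * other.y


v = Vec2(1.0, 2.0)
print(v.scaled(3.0).x, v.dot(Vec2(4.0, 5.0)))  # 3.0 14.0
```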

    Thanks for the reply. :)

  3. Note that, although in the long run PyPy is faster, we are still not satisfied with the result. We should do much better and have a faster warmup time. PyPy is full of heuristics to determine what to compile and when, and apparently they just don't work too well with this example. Hopefully, future versions of PyPy will be better :-)

  4. Sure, I understand. I'll definitely be testing it with every new release of PyPy to see how it's progressing.

  5. I'd be willing to help with Cython. I've done a reasonable amount of coding in it, including a few projects to wrap existing libraries.

  6. thouis, that would be great. I unfortunately haven't had too much time to learn Cython yet.

    My one simple test was just converting a single module (it does all of the vector calculations and such) into a compiled Cython module. It gave strange results once, and worked fine the next. Bringing all of the modules together into a Cython library would be an interesting next step.

    Also, if there's any support for simple parallel processing in Cython (since, as far as I understand, we could side-step the GIL), it would be very interesting to parallelize some of the calculations.

  7. This sounds awesome, kne. I look forward to a SWIG-less pybox2d with no more py2exe issues.

  8. Incredible work, and the PyPy speedups are stunning! This is very exciting!

    As for a name... how about "Soft PyBox2D"?

  9. Cython doesn't support parallelism directly, but it is fairly easy to mark code as not requiring the GIL, allowing multiple Python threads to be in the same code block at the same time.

    I'll take a look at cythonizing, and let you know how it goes.

  10. Thanks for the comments, michael and dabski. I can't say I'm a big fan of "soft pybox2d", somehow...

    That's too bad about it not being supported directly. Using normal threads, I can imagine the code would be that much uglier. Thanks for looking into it. I think that with some type annotations there will be significant speedups over pure Python. I look forward to seeing what you come up with!

  11. Just a quick update... on not-so-pure Python physics:

    So I'm making my first attempts at pure C Python types. The classes that inspired me to try this, Vec2 and Mat22, were surprisingly easy to write. Without any modification to the rest of the library, swapping in these two classes has made for a very significant increase in speed (and if it hadn't, I'd be rather confused).
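    A common way to wire such types in without forking the library is an optional import with a pure-Python fallback; the compiled module name `_speedups` here is hypothetical, not pypybox2d's actual layout.

```python
try:
    # Compiled C version of the hot classes, if the extension built.
    # (Module name is hypothetical.)
    from _speedups import Vec2
except ImportError:
    # Pure-Python fallback: same interface, no compiler required.
    class Vec2(object):
        __slots__ = ('x', 'y')

        def __init__(self, x=0.0, y=0.0):
            self.x, self.y = x, y

        def __add__(self, other):
            return Vec2(self.x + other.x, self.y + other.y)


v = Vec2(1.0, 2.0) + Vec2(3.0, 4.0)
print(v.x, v.y)  # 4.0 6.0
```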

    It's interesting to watch the runsnakerun squares shrink away after porting some core functions back to C.

    Anyway, it's rough around the edges, and still somewhat of a separate project from 'pypybox2d'. I'll put it up perhaps on a separate branch on the SVN sometime soon if anyone's interested.

    Perhaps there's some future in a mostly-Python physics library? Probably not, but it's fun to play around with, at least!

  12. Would definitely be interested in the mostly-python version of pypybox2d. Looks like it will suit applications where pybox2d is difficult to get working (e.g. kivy on android), even if performance takes a respectable hit.

    Looking forward to seeing where this part of the project goes.

  13. andy,

    Thanks for the comment. I'm hoping it will eventually be a viable option. Have you tried pymunk on that platform? I'd imagine it has fewer portability issues than pybox2d.

    In any case, I've put my first attempt at going the "mostly Python" (with a bit of C) route up on the SVN; you'll find the short changelog here. The C portion may be a bit messy, but I'm new at pure C extensions, so go easy on me. :)

    It's significantly faster in parts, but still not quite enough for things like the pyramid test and such. I also have yet to test on Linux, so there are likely to be issues with compilation. Please let me know if you run into any.

    % svn checkout pypybox2d