Mainly Tech projects on Python and Electronic Design Automation.

Wednesday, September 02, 2009

J for a Py Guy

I just started reading a bit more about the J programming language, after a lot of J activity on RC. I read the foreword to "J for C Programmers", where I picked out an acknowledgement to Ken Iverson, whose name rang a bell.

It seems that the J language is from the APL family tree, but uses the ASCII character set, with later ideas from function-level programming added. It is said to be good for MIMD machines, so I wonder how well it works on today's cheaper four-to-sixteen-core boxes, or whether it needs massively parallel MIMD machines to really shine?

It has tickled my fancy. I'll read some more of "J for C Programmers".


  1. APL is a funny language. I toyed with it a bit in 1991 and even wrote a small useful program in it, in about 7 lines of code. It would have taken me 40 lines in Python, I guess.

    Wow, what a compact language. But it puts all the other languages' "write the most unreadable Perl/Python/C code" contests to shame :-)

    Completely vector/matrix based. Really funny.
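    To give a flavour of what "completely vector/matrix based" means, here is a rough illustration in Python with NumPy (my own sketch, not APL or J): the same computation written as an explicit loop and as a single array-oriented expression.

    ```python
    # Array-oriented style vs loop style: sum of squares of the even
    # numbers in 1..10. The one-liner is closer in spirit to APL/J.
    import numpy as np

    data = np.arange(1, 11)  # the integers 1..10

    # Loop style.
    total = 0
    for x in data:
        if x % 2 == 0:
            total += x * x

    # Vectorized style: filter with a boolean mask, square, reduce.
    total_vec = int((data[data % 2 == 0] ** 2).sum())

    assert total == total_vec == 220
    ```

    Whole-array operations like this are the everyday idiom in APL and J, rather than an optional library feature.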

  2. I am currently reading a paper about something in J that is not in APL. I am attracted to the whole "proving theorems about programs" idea, and the paper describes a language that can both compute and prove theorems about its programs. All in the same language? I am not very far in, but it is certainly a good mental workout.

    - Paddy.

  3. The currently available implementations of J do not take advantage of multiple cores.

    The design philosophy of the implementers includes "speedups of less than a factor of 2 are not worth the bother". This means that support for 2-core systems was never worthwhile, and support for 4-core systems (with communication overhead and competition from the OS and other programs) would be a dubious proposition.

    With more parallel systems becoming available we will probably eventually see support for multi-core systems. But for the next decade, anyway, one of the biggest issues for such an implementation would probably be choosing which kinds of potential parallelism to ignore.

    (Ideally, you would want to amortize your communication costs with big computational payoffs from your extra cores. This means building a reliable and efficient estimating mechanism into the core of the language implementation.)
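    The amortization argument can be sketched with a hypothetical back-of-envelope model (my own illustration, not anything from the J implementation): split the work across the cores, charge a fixed communication overhead per extra core, and only parallelize when the estimated speedup clears a threshold such as the factor of 2 mentioned above.

    ```python
    # Toy cost model: ideal division of `work` across `cores`, plus a
    # fixed communication overhead paid for each extra core. All
    # numbers are made up for illustration.
    def estimated_speedup(work, cores, overhead_per_core):
        parallel_time = work / cores + overhead_per_core * (cores - 1)
        return work / parallel_time

    # Small job on 4 cores: the overhead swamps the gain.
    small = estimated_speedup(work=1.0, cores=4, overhead_per_core=0.2)

    # Big job on 4 cores: the overhead is amortized.
    big = estimated_speedup(work=100.0, cores=4, overhead_per_core=0.2)

    assert small < 2 <= big  # only the big job clears a 2x threshold
    ```

    In this toy model the small job manages only about a 1.2x speedup on 4 cores, while the big job gets close to 4x, which is the kind of distinction a built-in estimator would have to make before committing cores to a computation.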

  4. Hi Anon.
    Don't you see an issue here?
    Today, a company can buy a relatively cheap, two-chip Intel x86 box that looks like 16 CPUs to the operating system (and by my calculations gives better throughput when running 16 jobs in parallel than when running 8 or fewer). This is mainstream, today.
    One of my strong reasons for following programming language development is to see what could harness the multi-core present and future of the hardware. If J is only looking to address this in ten years' time, then it might "miss the boat"?

  5. The implementers ignore speedups of less than a factor of 2 so that they can give other, non-trivial speedups adequate attention. And some recent releases have seen factor-of-a-thousand speedups for certain cases.

    Sacrificing big improvements so we can achieve minor gains seems counterproductive. And, in recent history at least, the implementation has had plenty of room for improvement.

    Anyway, I am not an implementer of the language, and I can make no guarantees about boat schedules, but I imagine that J will continue to improve.


