Mainly Tech projects on Python and Electronic Design Automation.

Monday, December 29, 2008

Experiences converting a C lex/yacc parser for Verilog to pure Python

href="http://en.wiktionary.org/wiki/back_in_the_day">Back in
the day, simulators only had point delays, so all gate
libraries had buffers on their I/O. Now, we tend to use path, or
pin-to-pin delays in gate level models things have been made more
difficult for me.

My task was to randomly inject some stuck-at faults into a large gate level Verilog design for simulation. Individual gates of the design are connected by nets and, although I can force a net, I cannot force a node: all nodes on a net are affected by a force on the net, and you cannot get at an individual node by forcing a net inside a particular gate model, as there are no input buffers. The other problem with forcing nets/nodes using interactive commands of the simulator is that you have to compile the design sub-optimally, as all the nets have to be accessible, which stops the compiler/elaborator from performing certain speed optimizations on what will be an execution-speed-critical set of simulations.

Brainstorming with a colleague, we came up with the concept of modifying the netlist to insert logic that either passes a signal through or forces its output to 1 or 0 under external control.

So off I went to ScriptEDA (http://embedded.eecs.berkeley.edu/Alumni/pinhong/scriptEDA/) (great page - awful backdrop), and installed its Python Interface for Gate-level Netlist Engine (http://www-cad.eecs.berkeley.edu/%7Epinhong/scriptEDA/pythongnd-0.35.tar.gz), which I then tried on my gate level netlist. Unfortunately our gate-level netlist failed to parse because it used concatenations of nets, which the parser couldn't handle. I would need to extend the parser to support net concatenations.

Looking at the source of the parser, it was a C program using lex (http://en.wikipedia.org/wiki/Flex_lexical_analyser) and yacc to create C data structures, which were then interfaced to Python via a SWIG (http://www.swig.org/) generated wrapper. I don't like maintaining C programs, and I needed to flex my parsing abilities, so I decided to spend a short time researching a pure Python solution.

PLY
The obvious choice of Python parsing tool seemed to be PLY, as it stands for Python Lex-Yacc, so a download later and after poking around in the install, I came across yply.py (http://code.google.com/p/ply/source/browse/trunk/example/yply/yply.py?r=43):

yply.py

This example implements a program yply.py that converts a UNIX-yacc
specification file into a PLY-compatible program. To use, simply
run it like this:

% python yply.py [-nocode] inputfile.y >myparser.py

The output of this program is Python code. In the output,
any C code in the original file is included, but is commented out.
If you use the -nocode option, then all of the C code in the
original file is just discarded.

To use the resulting grammar with PLY, you'll need to edit the
myparser.py file. Within this file, some stub code is included that
can be used to test the construction of the parsing tables. However,
you'll need to do more editing to make a workable parser.

Disclaimer: This just an example I threw together in an afternoon.
It might have some bugs. However, it worked when I tried it on
a yacc-specified C++ parser containing 442 rules and 855 parsing
states.
yply.py worked on converting the ScriptEDA yacc source (after an edit to remove some comments in the .y file that it could not handle). It saved me a lot of work and saved me from transcription errors. I then wrote the Python lexer based on the C one, and created Python data structures to represent the parsed netlist. After getting everything parsing, I experimented until I got net concatenation working for module ports and instance pins. All the classes used for the parse data structure knew how to print themselves as valid Verilog, so I could now read in a gate-level Verilog netlist then write it out in my 'standard' form.
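For a flavour of what working with PLY looks like, here is a tiny illustrative lexer and grammar for simple 'name = number;' assignments (just a sketch of PLY's style, not the netlist parser itself):

import ply.lex as lex
import ply.yacc as yacc

tokens = ('ID', 'NUMBER', 'EQUALS', 'SEMI')

t_ID     = r'[A-Za-z_][A-Za-z0-9_]*'
t_NUMBER = r'\d+'
t_EQUALS = r'='
t_SEMI   = r';'
t_ignore = ' \t\n'

def t_error(t):
    t.lexer.skip(1)

def p_assignment(p):
    'assignment : ID EQUALS NUMBER SEMI'
    p[0] = (p[1], int(p[3]))

def p_error(p):
    print("Syntax error at", p)

lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse("width = 8;"))   # -> ('width', 8)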

I then documented the parsed data structures and worked out how to randomly insert my 'tie' gate into an instance port connection, for any input or output instance port, by selective modification of the parsed netlist. By using the Python standard difflib module I generated the standard form of the unmodified netlist and compared it to the modified netlist as an aid when debugging.

After a week's work I now have a completely Python based tool to programmatically modify our gate-level netlists. There is no C level maintenance envisaged, and the tool should be flexible enough to meet future needs - in the past, I may not have contemplated netlist-modification solutions to problems because of the lack of a suitable parser. PLY turned out to be straightforward to use.

Thursday, December 04, 2008

Debugging vs inserting print statements

After reading this blog entry from Corey Goldberg, you get some people saying to use debuggers instead of print statements.



So why can't you put that same print statement in the debugger and have it show up as an inserted print statement in the source? Why can't you do that for any code to insert? Rather than a breakpoint appearing as a blob to the side, you should have the option of it appearing as inserted code, maybe made visually distinct via shading, and containing the logic inherent in the breakpoint - such as a coded indication that it counts down and only fires on the 20th pass, then executes this code without pausing the run (or not).





A debugger breakpoint that doesn't stop the simulation but prints some text and the value of a few variables could then look like the insertion of a print statement in your source.





If you could make the interface work the other way round too, i.e. stick the IDE into an Add Debug Text mode where code added would automagically be turned into equivalent statements executed by the debugger, then you could really fight those pesky inserters of print statements for debugging purposes by giving them a debugger whose interface works that way anyway ;-)




</only_slightly_tongue_in_cheek>

Yet Another Blog Entry On Doctests.

Doctests are a current hot Python blog topic. I thought I would point out that many of the bloggers, especially those pointing out its problems, seem to be seasoned Pythoneers, aware of alternatives and so able to compare doctesting with other forms of testing.



I've found that there is a significant number of programmers out there who just don't write reusable and maintainable tests at all. They may test to ensure their program 'works' when they wrote it, but that is all - job done, finito!



Doctests perform the very important task of lowering the barrier to entry for testing. Novice Pythoneers ready for exposure to testing are likely to have used the Python shell to "try things out"; it is a small cognitive step to point out that by cutting and pasting those explorations from the shell into a docstring and adding some "magic sauce" to invoke it, you get a test. A trainer can then move on to the "why" of tests with the student thinking "I do most of it anyway", rather than thinking that it is yet more to learn.
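For example (a minimal illustration of the cut-and-paste idea, not taken from any of the posts being discussed):

def double(n):
    '''Return n doubled.

    The examples below were pasted straight from a shell session:

    >>> double(2)
    4
    >>> double(-3)
    -6
    '''
    return 2 * n

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # the "magic sauce" that runs the examples as tests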



I am not saying that doctest is without flaws. I am saying that it is still a good tool, without being perfect.

Tuesday, December 02, 2008

Knapsack Problem

The Rosetta Code blog this week mentioned that entries starting with a K were missing, so I reviewed my sources and added a knapsack-type problem to RC, together with a Python solution.



It's not the first time that I've been reminded that itertools.product can do arbitrary nested for loops. Before, I would have had to employ recursive function calls.
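For instance (a small illustration of the point, not the Rosetta Code solution itself), three nested loops collapse into one:

from itertools import product

# Instead of:
#   for x in range(2):
#       for y in range(3):
#           for z in range(4):
#               ...
for x, y, z in product(range(2), range(3), range(4)):
    print(x, y, z)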



- Paddy.

Sunday, November 30, 2008

New year. New PC

I'm thinking of retiring my trusty 17" laptop as my main PC and going quad core. I want to do it on the cheap, so was thinking of building from parts for hopefully around £400 and adding this monitor, or this TV/monitor. The screens are made by the same company and look very much the same, except the TV version has extra inputs and a TV tuner. I wonder if anyone has this TV connected to a PC and can comment?

I don't play graphic-intensive games so can get away with a basic graphics card.

I want the quad cores to try out more parallel programming. I have a VCD to toggle-count program that reads a large VCD file. I would like to work out how to accurately do this on the four cores and see how close to linear speed-up I can get. Our regression simulations at work leave behind approximately 20K simulation logs and coverage databases - I'm starting to think of ways of data-mining them, which would be aided by parallel processing (most of work's compute farm PCs have between 8 and 16 cores).

Priority would go to getting a little X-at-a-time script moved from bash to Python. Given a list of jobs to do - one per line in a file - the original bash script will execute X of them in the background, starting more as others finish. A rough sketch of how that might look in Python follows.
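Something like this, perhaps (just a sketch of the idea with a made-up file name and job count; the real script would want logging and error handling, and concurrent.futures needs a reasonably recent Python):

import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_jobs(job_file="jobs.txt", max_parallel=4):
    '''Run the shell commands listed one per line in job_file,
    keeping at most max_parallel of them going at once.'''
    with open(job_file) as f:
        jobs = [line.strip() for line in f if line.strip()]
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        returncodes = pool.map(lambda cmd: subprocess.call(cmd, shell=True), jobs)
    for cmd, rc in zip(jobs, returncodes):
        print(rc, cmd)

run_jobs()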

So, if you have any ideas of cheap bundles for creating quad-core PCs (in the UK), fire away :-)

- Paddy,

Saturday, November 15, 2008

Smearing the Static vs Dynamic Typing Dichotomy

CPython needs to ship with a compiler!
There, I've said it. But what are my reasons?

I truly believe that your programming language influences the way you go about solving a problem, and there is a mindset to programming in a dynamic language like Python/Perl/TCL/AWK/... that can lead to different types of solutions (and even differently perceived problems) than for programmers in, say, Java/C/C++/C#. (OK, some of the differences are also down to other factors such as IDEs and OS, which also affect how you solve an issue.)

Lately I have seen blog posts (http://www.25hoursaday.com/weblog/2008/01/02/DoesC30BeatDynamicLanguagesAtTheirOwnGame.aspx) commenting that the 4.0 release of C# will be adding dynamic type resolution capabilities via a keyword that will allow C# to finally do duck typing. The Boo language (http://boo.codehaus.org/) has done this trick from its outset. Thinking about it though, I don't think it will make C# programmers' style change to mimic that of dynamic language programmers, as they would have to stick their keyword on every definition and leave behind their perceived safety of compile-time type checking. They will, I think, rather use it as another resource that they can dip into, forming rules/design patterns on where it fits best, and use it from overwhelmingly statically typed source code.

When I look critically at Python (yep - I do do that from time to time), Python 3.0 has optional type decorations - but no compiler, and Python distributions such as Enthought's include external modules to help in creating compiled code to link to Python, but we need to have a complete solution out of the box, i.e. CPython should be shipped with a defined method of compiling 'Python-like' source code into machine code as part of its standard library and enshrined in the reference guide. That would allow Python programmers to do the opposite of what the C# programmers can do: write a mainly dynamic program - with dynamic language idioms - but be able to seamlessly have part of their recognisable Python code compiled. Maybe all it would take is the addition of a new static keyword to apply to classes/functions/modules.
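The hooks for those type decorations are already there in Python 3.0, even though nothing in the standard distribution compiles from them; something like this (purely illustrative) is legal today:

# Python 3.0 function annotations: stored on the function object,
# but ignored by CPython itself - no compiler consumes them.
def scale(values: "sequence of float", factor: float) -> list:
    return [v * factor for v in values]

print(scale.__annotations__)
# e.g. {'values': 'sequence of float', 'factor': <class 'float'>, 'return': <class 'list'>}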

You would need to think of static/dynamic as a continuum where, instead of having languages bunched at either end, you would have a smearing out towards the middle.

Existing languages have too much static vs dynamic baggage, but maybe a new language with type inferencing - where you were forced to first define the default type-checking regime as either dynamic or static, and were then given the option of declaring particular 'things' as being checked at either run time or compile time - would breed programmers with more interesting ways to solve problems.

- Paddy.

Sunday, November 02, 2008

Comments on a "Python Tutorial"

Hi,
I thought I'd go through your tutorial on Python and make some notes, if that's OK.



"No ++ / --":

I think this was the same kind of decision as Python not allowing assignments in if conditionals. There is a high incidence of problems caused by confusion over the misuse of increment/decrement operators in languages like C. They can make programs harder to read.



"Slow in comparison to compiled languages" - (Here comes the dynamic language supporters usual reply): Yes, but often getting something right is much more important and often never requires the speep that Python gives you. People have prototyped in Python and shipped the prototype as it fits all quality metrics in a far shorter development time. I recently had a case of that myself here.



"Whitespace for indentation": Arguably a positive feature. It should read "whitespace for block delineation" or something. Run a pretty printer on say a C source file and it will try and indent it so that the visible level of indentation mirrors the logical structure of the code. Code is hard to read if the indentation does not follow the logic. Python recognises this and goes further in saying that the common use of block start and end delimeters such as {/} or begin/end are superfluous and only help in writing misleading code.



You seem to have missed the importance of docstrings in Python.



range "(good for indexing)": True, but in idiomatic Python you are much more likely to iterate over the members of a collection object rather than create an index using range:

for x in my_list:
    ...

rather than:

for i in range(len(my_list)):
    x = my_list[i]
    ...
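And when the index genuinely is needed alongside the value, enumerate is the idiomatic way to get it (a small aside, not a comment on the tutorial):

for i, x in enumerate(my_list):
    print(i, x)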



"Functions work in python much like they do any other language": those with only Java and C experience might miss out on the extended argument passing of Python functions if not made aware.



"Classes are more basic than what you will be use to from java": In what way? There maybe less to learn to become equally proficient but I see this as a good thing. Note that Python is dynamic, meaning that, if you wanted to, you can modify classes and instances at run-time. Its not necessarily a X language does classes better than Y. Python does things differently and if you try and constrain Python to how you use OO in Java, you won't get the best from it.



"from os import *": It's a bad Python habit. You want to warn students off this, as maintenance can suffer.



"We can raise our own exceptions": It could be but shouldn't be a string. best to use an exception type.



"Numbers": There are decimals too, which are good for financial calculations.



"Tuples": If each position in a sequence has explicit meaning then you should use a tuple, e.g. a point on a surface might be the tuple (x,y). coordinates of a car would more likely be a list of points.



"Strings": Misses triple quoted strings.



Filter and map, although present, have declined in use since the later addition of the powerful list comprehension and generator expression features.
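For example, this pair (illustrative only) do the same job, and the comprehension is usually preferred:

nums = range(10)
doubled_odds_1 = list(map(lambda n: n * 2, filter(lambda n: n % 2, nums)))
doubled_odds_2 = [n * 2 for n in nums if n % 2]
print(doubled_odds_1, doubled_odds_2)   # [2, 6, 10, 14, 18] twice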



One-liners in Python: Difficult to do. Usually people write multi-line scripts in an editor. In a shell such as bash, which has multi-line string delimiters, you might be able to use a multi-line single-quoted string and embed a multi-line Python program using Python's -c argument, restricting your Python code to double quotes if needed.



Have fun with Python.



- Paddy.

Paddy3118 said...

Ouch!

My notes are longer than your blog entry :-)


Thursday, October 30, 2008

Dear Americans

Please vote for Obama.



Whoever you vote for - our British Prime Minister has to suck up to them, and it has been painful, as a Brit, watching what our elected government has had to do these past few years. Unlike the French or the Germans, we have had to go along with so many of your current administration's schemes, and so have been seen as no more than a lickspittle.



It is time to make America great again! (We Brits can't stand the reflected ridicule).



P.S. The obligatory Python angle: If you are going to vote electronically, then use Open Source tools such as pvote.

Tuesday, October 21, 2008

On Regression Test Optimization / It's My Data

When regression testing a large ASIC using constrained random test techniques (or directed tests, for that matter), we collect coverage metrics on each seeded test and routinely rank (or optimize) the tests to find out which randomly generated tests are 'significant'.

The usual meaning of significant is "what is the smallest set of tests that would give the same overall coverage?". The tools don't try every combination of tests (I have used a simple greedy algorithm). This got me thinking about what other ways I might want to calculate the test rankings.

I came up with two other ways of ranking tests that might be useful:
  1. File centred ranking: Using coverage results expressed on a per-source-file basis, extract the ranking of tests for full coverage of each source file individually; finally rank just the tests contributing to individual files on how they contribute to all files of the design.
  2. Instance centred ranking: Using coverage results expressed on a per-instance basis, extract the ranking of tests for full coverage of each instance individually; finally rank just the tests contributing to individual instances on how they contribute to all instances in the design.
(Let's call what we normally do "design centred ranking", as the ranking is looking at all the tests' coverage of the whole design.)

File and instance centred ranking leave behind useful information for the designers: which tests target the parts of the design they are working on. I guess (because this is another thought experiment) that FCR and ICR would never give smaller ranked test lists than DCR.

All three of the above methods of ranking tests could do with flagging which tests provide sole coverage for a cover point. Some tests may cover points that other tests also cover, while other tests may cover points that only they cover; it might be useful to be able to show this information.

It's My Data


If the tool vendors supported an interface to Python, then data mining within your coverage results would be a lot simpler, and would get us part way to the kind of environment scientists use in SciPy. Expressing test coverage as Python sets, you could roll your own coverage processing.
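For example (a toy sketch of what I mean; the test names and coverage points are invented):

coverage = {                       # test name -> set of covered points
    "t1": set(["p1", "p2", "p3"]),
    "t2": set(["p2", "p4"]),
    "t3": set(["p4"]),
}
for test, points in coverage.items():
    # Points hit by no other test: the 'sole coverage' idea above.
    rest = set().union(*(pts for t, pts in coverage.items() if t != test))
    print(test, "solely covers:", points - rest)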

Friday, October 03, 2008

Max Licenses In Use

I just created the task, an example log file, and a Python solution for this new programming task on Rosetta Code.

The task is a simplified version of something I solved at work when I thought we were being overcharged.

Thursday, September 25, 2008

How Nerdy are you


I am nerdier than 93% of all people. Are you a nerd? Click here to find out!


They obviously have it wrong. I have a family, and, and ...

Tuesday, September 23, 2008

Python Fractions issue.

There seems to be a problem/difference in calculating with the new fractions module when comparing Python 2.6rc2 and 3.0rc1.

I was reading the paper "Interval Arithmetic: Python Implementation and Applications" (http://conference.scipy.org/proceedings/SciPy2008/paper_3/full_text.pdf) and thought to try the first example function f(x,y), which needs more precision than Python floating point provides (on my Windows PC) to calculate a sensible answer.

Aha, I thought, let's try this with Fractions - so I first installed Python 3.0rc1, typed in the function, saw the rubbish result, then used Fraction and got the 'right' result.
I then wondered if fractions had been back-ported to Python 2.6rc2, found that it had, but got different results.

As you can see from the transcripts below, Python 2.6rc2 has a problem calculating t3 = F(11,2) * y**8 + x/(2*y), where F is Fraction.

I wonder where the problem lies?

style="text-align: left; margin-left: auto; margin-right: auto; font-family: monospace;"
border="4" cellpadding="2" cellspacing="2"> style="white-space: nowrap; text-align: left; vertical-align: top;">Python
3.0rc1 (r30rc1:66507, Sep 18 2008, 14:47:08) [MSC v.1500 32 bit (Intel)]
on
win32
Type "help", "copyright", "credits" or "license" for
more information.
>>> from fractions
import Fraction as F
>>>
def f(x,y):
...     return
(
...            
(333.75 - x**2)* y**6 + x**2 *
...            
(11* x**2 * y**2 - 121 * y**4 - 2)
...            
+ 5.5 * y**8 + x/(2*y))
...
>>>
f(77617.0, 33096.0)
1.1726039400531787
>>>
def f2(x,y):
...    
return (
...            
(333 + F(3,4) - x**2)* y**6 + x**2 *
...            
(11* x**2 * y**2 - 121 * y**4 - 2)
...            
+ F(11,2) * y**8 + x/(2*y))
...
>>>
f2(77617.0, 33096.0)
1.1726039400531787
>>>
f2(77617, 33096)
-0.82739605994682131
>>>
x,y = 77617, 33096
>>> t1 = (333 + F(3,4)
- x**2)* y**6 + x**2
>>> t1
Fraction(-7917110903377385049079188231255750815,
1)
>>> t2 = (11* x**2 * y**2 - 121 * y**4
- 2)
>>> t2
-72586759091903445314
>>>
t3 = F(11,2) * y**8 + x/(2*y)
>>> t3
7.9171113406689609e+36
>>> style="white-space: nowrap; text-align: left; vertical-align: top; width: 50%;">Python
2.6rc2 (r26rc2:66507, Sep 18 2008, 14:27:33) [MSC v.1500 32 bit
(Intel)] on win32
Type "copyright", "credits" or "license()"
for more information.
IDLE
2.6rc2     
>>>
from fractions import Fraction as F
>>>
def f(x,y):
    return (
   
    (333.75 - x**2)* y**6 + x**2 *
   
    (11* x**2 * y**2 - 121 * y**4 - 2)
   
    + 5.5 * y**8 + x/(2*y))

>>>
f(77617.0, 33096.0)
1.1726039400531787
>>>
def f2(x,y):
    return (
   
    (333 + F(3,4) - x**2)* y**6 + x**2 *
   
    (11* x**2 * y**2 - 121 * y**4 - 2)
   
    + F(11,2) * y**8 + x/(2*y))

>>>
f2(77617.0, 33096.0)
1.1726039400531787
>>>
f2(77617, 33096)
style="background-color: rgb(255, 102, 102); font-weight: bold;">Fraction(-1,
1)
>>> x,y = 77617, 33096
>>>
t1 = (333 + F(3,4) - x**2)* y**6 + x**2
>>>
t1
Fraction(-7917110903377385049079188231255750815, 1)
>>>
t2 = (11* x**2 * y**2 - 121 * y**4 - 2)
>>>
t2
-72586759091903445314L
>>> t3
= F(11,2) * y**8 + x/(2*y)
>>> t3
style="background-color: rgb(255, 102, 102); font-weight: bold;">Fraction(7917111340668961361101134701524942849,
1)
>>>

Sunday, August 10, 2008

Monty Hall Problem Simulations

I had started a new topic on Rosetta Code on the Monty Hall problem (http://www.rosettacode.org/wiki/Monty_Hall_simulation), using an example I had lying around that I had written in Python to simulate the effects of different player strategies. The game goes like this:
  • The contestant is in front of three doors that he cannot see behind.
  • The three doors conceal one prize, the rest being booby prizes, arranged randomly.
  • The host asks the contestant to choose a door.
  • The host then goes behind the doors, where only he can see what is concealed, and always opens one door, out of the others not chosen by the contestant, that must reveal a booby prize to the contestant.
  • The host then asks the contestant if he would like either to stick with his previous choice, or switch and choose the other remaining closed door.
It turns out that if the contestant follows a strategy of always switching when asked, then he will maximise his chances of winning.

I then re-wrote the simulator in AWK (gawk), so there are now at least two implementations of the simulator on Rosetta Code.

The simulator results show that:
  • A strategy of never switching wins 1/3rd of the time.
  • A strategy of randomly switching wins 1/2 of the time.
  • A strategy of always switching wins 2/3rds of the time.
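A cut-down sketch of the kind of simulation involved (my illustration here, not the Rosetta Code version itself; the door and game counts are arbitrary):

import random

def play(switch, doors=3, games=100000):
    '''Return the fraction of games won with the given strategy.'''
    wins = 0
    for _ in range(games):
        prize = random.randrange(doors)
        choice = random.randrange(doors)
        # The host opens a booby-prize door the contestant did not choose.
        opened = random.choice([d for d in range(doors)
                                if d != choice and d != prize])
        if switch:
            choice = next(d for d in range(doors)
                          if d != choice and d != opened)
        wins += (choice == prize)
    return wins / float(games)

print("never switch :", play(switch=False))   # ~0.33
print("always switch:", play(switch=True))    # ~0.67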
style="margin-left: 40px;">

Tuesday, August 05, 2008

Spiral

This exercise I thought up myself after doing Zig Zag (http://paddy3118.blogspot.com/2008/08/zig-zag.html).
"Produce a spiral array. A spiral array is a square arrangement of the first N² integers, where the numbers increase sequentially as you go around the edges of the array, spiralling inwards.

For example, given 5, produce this array:

 0  1  2  3  4
15 16 17 18  5
14 23 24 19  6
13 22 21 20  7
12 11 10  9  8
"

The Algorithm

Again I chose to define the x axis as the column numbers, increasing linearly left to right; the y axis as the row numbers, increasing linearly top to bottom; and the top left corner as (0,0).
If you think of the outer part of the spiral, it can be defined by how x and y change, i.e. deltas. First, along the top row, dx,dy = (1,0), meaning that for successive places x changes by 1 and y does not.
Similarly, for the next straight run down the RHS: dx,dy = (0,1).
Along the bottom: dx,dy = (-1,0).
And back to the top along the LHS: dx,dy = (0,-1).
The next time around the loop, dx,dy take on those same values.

When to stop going in a straight line and change direction?
  • When you would go off the edge of the array.
  • If you are about to 'step' on a position that is already filled.
When to stop?
  • When you have placed the last integer.


The Program

I have two ways of generating successive dx,dy values: a formula in lines 19 and 34, and a commented-out use of itertools.cycle in lines 14, 17, 18 and 33, which could be used instead. They generate the same sequence.
When to change direction is handled by initializing myarray with None values in line 22 and the conditional statement at line 29.
When to stop is handled by the outer while loop at line 24.

I added a flag at line 15 that allows you to trace the calculation.


 1 '''
 2 Author: Donald 'Paddy' McCarthy
 3 Email: paddy3118@gmail.com
 4 File: spiral.py
 5
 6 Routine creates a square array filled with numbers that increase
 7 in a spiral pattern into the center.
 8
 9 Note: for spiralling out just swap array[x,y] for n**2-1-array[x,y]
10 Note: Vim colourizing plus
11        :%s/\(&nbsp;&nbsp;\)\?  /\1\&nbsp;\&nbsp;/g
12 '''
13
14 #from itertools import cycle
15 explain = True
16 def spiral(n):
17     #deltas = cycle( ((1,0),(0,1),(-1,0),(0,-1)) )
18     #dx,dy = deltas.next() # Starting increments
19     dx,dy = 1,0             # Starting increments
20     x,y = 0,0               # Starting location
21     i = 0                   # starting value
22     myarray = [[None]* n for j in range(n)]
23     if explain: print "dx,dy ->", dx,dy
24     while i < n**2:
25         myarray[x][y] = i
26         if explain: print "x,y->[x,y]", x,y, myarray[x][y]
27         i += 1
28         nx,ny = x+dx, y+dy
29         if 0<=nx<n and 0<=ny<n and myarray[nx][ny] == None:
30             x,y = nx,ny
31             continue
32         else:
33             #dx,dy = deltas.next()
34             dx,dy = (-dy,dx) if dy else (dy,dx)
35             if explain: print "dx,dy ->", dx,dy
36             x,y = x+dx, y+dy
37     return myarray
38 def printspiral(myarray):
39     n = range(len(myarray))
40     for y in n:
41         for x in n:
42             print "%2i" % myarray[x][y],
43         print
44
45 printspiral(spiral(6))

Sample output:

>>> printspiral(spiral(4))
dx,dy -> 1 0
x,y->[x,y] 0 0 0
x,y->[x,y] 1 0 1
x,y->[x,y] 2 0 2
x,y->[x,y] 3 0 3
dx,dy -> 0 1
x,y->[x,y] 3 1 4
x,y->[x,y] 3 2 5
x,y->[x,y] 3 3 6
dx,dy -> -1 0
x,y->[x,y] 2 3 7
x,y->[x,y] 1 3 8
x,y->[x,y] 0 3 9
dx,dy -> 0 -1
x,y->[x,y] 0 2 10
x,y->[x,y] 0 1 11
dx,dy -> 1 0
x,y->[x,y] 1 1 12
x,y->[x,y] 2 1 13
dx,dy -> 0 1
x,y->[x,y] 2 2 14
dx,dy -> -1 0
x,y->[x,y] 1 2 15
dx,dy -> 0 -1
 0  1  2  3
11 12 13  4
10 15 14  5
 9  8  7  6
>>>

Monday, August 04, 2008

Zig Zag


Yet another problem from Rosetta Code.
The problem statement is:
"
Produce a zig-zag array. A zig-zag array is a square arrangement of the first N² integers, where the numbers increase sequentially as you zig-zag along the anti-diagonals of the array. For example:
 0  1  5  6
 2  4  7 12
 3  8 11 13
 9 10 14 15
Or take a look at this.
"
I read all the other answers and saw a very short example in the J language, which looked like line noise to me, as well as a fairly short example in Haskell, a programming language I don't know, but I got a hint as to how it is done by the mention of sorting indices.

I printed a sheet of graph paper, drew a 6 by 6 grid and put in the zig-zag numbers. After a few false starts I realised what distinguishes a diagonal: if the top left is coordinate 0,0, x increases to the right, and y increases going down the rows, then the sum of the indices is constant along a diagonal.

If we are going to do some sorting then we need to sort first on the sum of the indices.

Next, you find that when the sum is odd you are 'going' down the diagonal, so increasing y gives the sort order, and vice versa; so secondly sort on -y if the sum of the coordinates is even, otherwise +y.

The sort will give the order of the coordinates; enumerate that order and you get which integer goes in which location, et voila!

The code:

The sort is split between the iterable, which simply generates the coordinate pairs, and the key function, which returns a tuple of the two elements highlighted above for sorting the coordinates (it's equivalent to the decorate-sort-undecorate or Schwartzian transform method, for any non-Pythoneers). It is the main part of the program:

'''
zigzag algorithm
Description from: http://www.rosettacode.org/wiki/Zig_Zag
Code Author: Donald 'Paddy' McCarthy
'''
def zigzag(n):
    indexorder = sorted(((x,y) for x in range(n) for y in range(n)),
        key = lambda p: (p[0]+p[1], -p[1] if (p[0]+p[1]) % 2 else p[1]) )
    return dict((index,n) for n,index in enumerate(indexorder))

def printzz(myarray):
    n = int(len(myarray)** 0.5 + 0.5)
    for x in range(n):
        for y in range(n):
            print "%2i" % myarray[(x,y)],
        print

printzz(zigzag(6))
Running the code produces:
 0  1  5  6 14 15
 2  4  7 13 16 25
 3  8 12 17 24 26
 9 11 18 23 27 32
10 19 22 28 31 33
20 21 29 30 34 35

Now to add it to Rosetta Code.

Sunday, August 03, 2008

SEDOL


Whilst browsing Rosetta Code, I came across a good algorithm for training students, SEDOL, so coded up a version in what I would call a very literal translation of its explanation.
I changed the Python example on Wikipedia, as it did not have a validation function as other examples have, and added my code to the Rosetta page.
'''
From: http://en.wikipedia.org/wiki/SEDOL
SEDOLs are seven characters in length, consisting of two parts:
 a six-place alphanumeric code and a trailing check digit. 
'''
import string
# constants
sedolchars = string.digits + string.ascii_uppercase
sedol2value = dict((ch, n) for n,ch in enumerate(sedolchars))
for ch in 'AEIOU':
    del sedol2value[ch]
sedolchars = sorted(sedol2value.keys())
sedolweight = [1,3,1,7,3,9,1]
def check(sedol):
    return len(sedol) == 7 and \
           all(ch in sedolchars for ch in sedol) and \
           sum(sedol2value[ch] * sedolweight[n]
               for n,ch in enumerate(sedol)
               ) % 10 == 0
def checksum(sedol):
    tmp = sum(sedol2value[ch] * sedolweight[n]
              for n,ch in enumerate(sedol[:6])
              )
    return sedolchars[ (10 - (tmp % 10)) % 10]

sedol = '0263494'
print sedol, checksum(sedol)
print
# From: http://www.rosettacode.org/wiki/SEDOL
for sedol in '''
 710889
 B0YBKJ
 406566
 B0YBLH
 228276
 B0YBKL
 557910
 B0YBKR
 585284
 B0YBKT
 '''.split():
    print sedol + checksum(sedol)
Note:
I came back to the code after a few hours of sleep and thought initially that I would have trouble in the return statement of checksum(), as I thought I would need a dictionary doing the full reverse mapping of sedol2value, from ints to characters; sedolchars no longer includes the vowels, so using that method for a value over 9 would not work.
Luckily, the result of the calculation is always in the range 0 <= n <= 9, and within that range, sedolchars[n] works.

Saturday, July 26, 2008

Ranking Modelsim Coverage results using Python, for Speed?!


I have a problem with ranking tests at work - it just takes too long, and the 'obvious algorithm' should be much quicker.

The Task

I should explain that we use Modelsim for simulating a processor described in Verilog RTL, and create PSL assertions and cover statements to help with the verification. Periodically we run coverage simulations and create coverage databases for each test, containing the coverage information for each PSL cover or assert statement.

Our test suite consists of ~2000 'frozen' randomly generated tests with seeds we have used before and want to use again, because they give good coverage; and a few thousand newly randomly generated tests.

We want to continue keeping tests that worked well in the past, and add to the kept set of tests any new tests that contribute to the overall coverage of the design.

The Problem

We have a problem, however, with the Modelsim 'vcover ranktest' command that we use to perform the ranking. It takes around ten hours to rank ~5000 tests, when we were hoping for around one hour.

Around 5000 coverage databases to rank, each with around 5000 coverage points (of which fewer than half are covered), shouldn't be that big a deal.

Initial Thoughts

I came up with the following algorithm that I thought would do a good job of ranking tests - it gets around the size problem of trying every possible permutation of tests for coverage by using a greedy algorithm: choose the test giving most coverage, then the test giving the most incremental additional coverage, and so on.
I put it in an email as:
My algorithm for Ranking tests by coverage.
  • Read in and convert each test's coverage data into the set of coverage points that it contributes to
  • Sum the contributions of all tests marked as compulsory to form the ranking set and remove those tests from the pool
    • Keep note of tests added to the ranking set and the order they are added. (For compulsory tests, order within the compulsory tests is unimportant; they all come 'first'.)
  •  Repeat until pool is empty:
    • pool_max = (0, "" )
    • for test in pool:
      • contribution = number of cover points in test that are not covered already in ranking_set
      • if contribution == 0:
        • drop test from pool into the non-contributory list
      • else if contribution > pool_max.contribution:
        • pool_max = (contribution, test)
    • if pool_max.contribution > 0:
      • add the pool_max test into the ranking set
      • remove the pool_max test from the pool
    • else:
      • empty everything from the pool into the non-contributory list
        (They don't add anything)
  • You end with:
    •  an ordered list of the ranking tests.
    • An (ordered) list of the non-contributing tests.
    • All tests marked as compulsory are in the ranking tests.
By only comparing against the sum of the already contributing tests, and removing non-contributing tests ASAP, I hope to get reasonable run times.
I'd write it myself if there was a Python interface to UCDB files. I won't touch it with C though.
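Something like this is the shape of it in Python (just a sketch, assuming each test's coverage has already been read into a set of coverage point names; it also skips the drop-tests-as-soon-as-they-stop-contributing optimisation described above):

def rank(test2points, compulsory=()):
    '''Greedy ranking: test2points maps test name -> set of coverage points.'''
    ranking = list(compulsory)
    covered = set()
    for test in compulsory:
        covered |= test2points[test]
    pool = dict((t, pts) for t, pts in test2points.items() if t not in ranking)
    noncontributing = []
    while pool:
        # Pick the test adding the most new coverage points.
        best = max(pool, key=lambda t: len(pool[t] - covered))
        gain = pool[best] - covered
        if not gain:
            noncontributing.extend(pool)   # nothing left contributes anything
            break
        ranking.append(best)
        covered |= gain
        del pool[best]
    return ranking, noncontributing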

Scratching the Itch

I did decide to create my own ranking program last night.

The flow was to dump the PSL assertion coverage from Modelsim's Universal Coverage DB (UCDB) files: a total of 1701 ucdb files generating 1701 separate textual report files. Then I would create a Python script, broadly following the algorithm above, to create a ranked list of tests from the report files.

Dumping the data from the UCDBs to reports took ten minutes to run.

A little experimentation led me to read in test names and instance names for coverage points and substitute unique integers for them as early as possible in the program, as I had memory problems trying to create 25 million sets whose members were long instance names of around 130 chars each.
I found ways to remove tests from the ranking just as soon as they contributed nothing.

My program is 120 lines of Python and took a minute to run!

So Fast. Must Check.

I did some checks against the 'vcover ranktest' results:
  • One gives an extra 39 new contributing tests, the other an extra 40.
  • Of the extra tests 30 are the same in both programs.
Now I need to get my team-mates to check the Python code.

I would expect some variation in ranked tests because, for example, if two tests contribute equally to new coverage, how do you choose? I chose to select first on maximum contribution to coverage; then on minimum simulation cycles; then on test name.
Once the algorithms make a different choice there is no mechanism to make them converge.

Endpoint

My flow gets the data from the database at the beginning, then uses it in an efficient manner. If Modelsim's algorithm constantly goes back to the UCDB for data then that could be the source of the bottleneck, as just getting the data once takes ten minutes.

- Paddy.

Friday, July 04, 2008

Essence of Duck

A distillation of duck typing

style="margin-left: 40px;"> style="font-family: Comic Sans MS; font-style: italic;">"Substitution
of an object with regard only of the function and signature style="font-family: Comic Sans MS; font-style: italic;">of
its run-time methods style="font-family: Comic Sans MS; font-style: italic;">"

Lets
break it down:
style="font-family: Comic Sans MS; font-style: italic;">Substitution
Its
about the later use of a new object type instead of another .
style="font-family: Comic Sans MS; font-style: italic;">object
Its
about the use of data types.
style="font-family: Comic Sans MS; font-style: italic;">regard
What
needs to be in place for things to work.
style="font-family: Comic Sans MS; font-style: italic;">function
what
the substituting object has to do to fit in.
style="font-family: Comic Sans MS; font-style: italic;">signature
the
substituting objects method signature  needed for
compatability.
style="font-family: Comic Sans MS; font-style: italic;">run-time
It
is what actually happens at run time that matters without the
constraint of what can be determened at compile time.
style="margin-left: 40px;">There is purposefully no mention
of type-checking, class hierarchy checking, in fact any 'look before
you leap' checks. Note too, that I purposefully put function before
signature - too many comments on duck typing  focus on issues
of signature and assume
programmer checks for correct substitutable function are style="font-style: italic;">not part of duck
typing.
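A minimal Python illustration of the idea (mine, just for this post):

class Duck(object):
    def quack(self):
        return "Quack!"

class Robot(object):
    def quack(self):
        return "Beep (a passable quack)"

def poke(thing):
    # No isinstance() or interface checks: anything whose quack() method
    # does the right job can be substituted here at run time.
    print(thing.quack())

poke(Duck())
poke(Robot())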
- Paddy.

Saturday, June 21, 2008

More Fizz than Buzz

I downloaded the first beta release of Python 3, which corresponded with another thread on FizzBuzz in c.l.p (http://groups.google.com/group/comp.lang.python/msg/e7acbdd108a0ea05) that featured some slick maths using Fermat's little theorem (http://en.wikipedia.org/wiki/Fermats_little_theorem) (yep, it's really called that). I learnt a little about the theorem, and did a quick spreadsheet to show that it worked, but could not grasp how they made it work in their examples.

It did however get me toying with Py3K, where I came up with this snippet of code from the description of the problem.

The Problem:
Write a program that prints the numbers from 1 to 100. But for multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz".
My Py3K solution:

print( ' '.join(
    'FizzBuzz' if i%15 == 0
    else 'Buzz' if i%5 == 0
    else 'Fizz' if i%3 == 0
    else str(i)
    for i in range(1,101)))

What I like about this solution is that it applies the constraints from the problem description in the (reverse) order that they are given, and does not rely on the elementary (to Pythoneers) knowledge that a test for not(X) is equivalent to a test of whether X == 0. I'd be happy to put this in, say, a Wikipedia-type article on FizzBuzz, or teach it to newbies ready for immersion into gencomps (as maybe the second or third example).

- One for trainers.

- Paddy.

Friday, June 13, 2008

Fun with Wordle

title="Wordle: Go Deh too!"> src="http://www.wordle.net/thumb/Go_Deh_too%21"
style="padding:4px;border:1px solid #ddd"
>


I've been playing...

Saturday, May 24, 2008

Duck Typing Done Right Is Wrong!

A
critique of this
critique of Duck Typing by Henry Story.


Saturday May 26, 2007


Duck Typing done right


Dynamic Languages such as Python, Ruby and Groovy, make a big deal
of their flexibility. You can add new methods to classes, extend
them, etc... at run time, and do all kinds of funky stuff. You can
even treat an object as of a certain type by looking at its methods.
This is called Duck
Typing
: "If it quacks like a duck and swims like a Duck then
it's a duck", goes the well known saying. The main criticism of
Duck Typing has been that what is gained in flexibility is lost in
precision: it may be good for small projects, but it does not scale.
I want to show here both that the criticism is correct, and how to
overcome it.


Let us look at Duck Typing a little more closely. If something is
a bird that quacks like a duck and swims like a duck, then why not
indeed treat it like a duck? Well one reason that occurs immediately,
is that in nature there are always weird exceptions. It may be
difficult to see the survival advantage of looking like a duck, as
opposed to say looking like a lion, but one should never be surprised
at the surprising nature of nature. (He stretches the analogy past its usefulness. It's not nature, it's a convenient name to wrap a computing idiom)

Anyway, that's
not the type of problem people working with duck typing ever have.
How come? Well it's simple: they usually limit the interactions of
their objects to a certain context, where the objects being dealt
with are such that if any one of them quacks like a duck, then it is
a duck. And so here we in essence have the reason for the criticism:
In order for duck typing to work, one has to limit the context, one
has to limit the objects manipulated by the program, in such a way
that the duck typing falls out right.(Yes
you do, and this limitation is very rarely seen as a problem).

Enlarge the context,(presumably to
where the objects being dealt with quack like a duck but are not
ducks)
, and at some point you will find objects that don't fit
the presuppositions of your code. So: for simple semantic reasons,
those programs won't scale. The more the code is mixed and meshed
with other code, the more likely it is that an exception will turn
up. The context in which the duck typing works is a hidden
assumption, usually held in the head of the small group of developers
working on the code. In short he is
saying:



  1. Use in large programs will cause objects with the right signature but the wrong actions to be successfully called under Duck typing, but give the wrong result.


  2. Duck typing only works
    because the developers know what are compatible actions.


  3. Only a small group of
    developers can know what are compatible actions.



A slightly different way of coming to the same conclusion, is to
realize that these programming languages don't really do an analysis
of the sound of quacking ducks. Nor do they look at objects and try
to classify the way these are swimming. What they do is look at the
name of the methods attached on an object, and then do a simple
string comparison. If an object has the swim method,
they will assume that swim stands for the same type of
thing that ducks do. Now of course it is well established that
natural language is ambiguous and hence very context dependent. The
methods names gain their meaning from their association to english
words, which are ambiguous. There may for example be a method named
swim, where those letters stand for the acronym "See
What I Mean"
. That method may return a link to some page on
the web that describes the subject of the method in more detail, and
have no relation to water activities. Calling that method in
expectation of a sound will lead to some unexpected results
But
once more, this is not a problem duck typing programs usually have.
Programmers developing in those languages will be careful to limit
the execution of the program to only deal with objects where swim
stand for the things ducks do. But it does not take much for that
presupposition to fail. Extend the context somewhat by loading some
foreign code, and at some point these presuppositions will break down
and nasty difficult to locate bugs will surface. Once again, the
criticism of duck typing not being scalable is perfectly valid. So
his criticism here is that:



  1. Load 'foreign' code with
    the right method signature and it is likely to fail.











Let's take his points one by one and show why Duck Typing can and does work for many people:


His point 1:


The difference between a large and a small program would be that the code is so large that objects are passed to functions based solely on their method signature compatibility, rather than on what those methods do and what a function expects to do with the object.


As a programmer you need to know the functionality of what you put together. Small or large systems - you still need to know; you should not pass it off, and only testing will show how right you are.


His point 2:


Turn it on its head: "If the developers don't know what are compatible actions, Duck typing doesn't work".


Yes, for Duck typing to work, developers need to know about what they are linking, but other processes, essential to the production of quality software, will make this tractable.


His point 3:


Following on from the answer to his point 2: if you are not building a large system out of a collection of smaller ones with local, identifiable interdependencies then it could be a problem, but that isn't Duck typing at fault - you have a mess that no one can understand but are trying to force development on. It's not the size of the codebase that is defeating Duck typing, it is its haphazard organization.


His point 4:


It is wise to be wary of code you don't know the history behind. In larger projects you are more likely to use foreign code, but exploring it for suitability for Duck typing is part and parcel of good practice. You should know about what you are using, Duck typing or not. If you have programmers using code that is still 'foreign' to them, then you cannot expect quality.






You could go on and read his
solution to a poorly stated problem in his article.







- Paddy.







On careful reading of his follow-up article, in the section "Scalability and efficiency", he turns things on their head by admitting that the original article was about Duck Typing not being a replacement for Web URIs. And they are not. We can agree on that. :-)





Saturday, April 26, 2008

Bug in simple code after 2+ years and 133 comments

I was just reading this blog entry about a simple game of presenting a scrambled list of numbers, where you have to keep reversing the leftmost N numbers until all the numbers are in ascending order.

The game is implemented in various languages, including PHP, Ruby and Python, and most seem to be structured as:
  1. create a randomized list of numbers
  2. while the list is not sorted:
    1. ask for and input the number to be reversed
    2. reverse that portion of the list.
Ignoring any checks on input, it seemed odd that the programs did not check whether the initial randomization at step 1 had produced an already-sorted list, which would cause the while loop at step 2 to be skipped altogether!

Here is proof for Python:

>>> import random
>>> n = 10
>>> sortlist = range(n)
>>> randlist = range(n)
>>> random.shuffle(randlist)
>>> x = 0
>>> while randlist != sortlist:
...     x += 1
...     random.shuffle(randlist)
...
>>> print x
714418
>>>
If you were testing an implementation or otherwise running it, every once in a while it would just exit. Run it again and it would most likely work for a long time. It is not something I would like to rely on the normal xUnit-type tests/continuous-testing regime to uncover, as although a transient fail would be noted, those test regimes rely heavily on easy reproducibility of the failure.

I guess what would be needed is knowledge of corner cases. For shuffle, corner cases to think of would include returning the input order, the reverse of the input order, sorted order, and reverse sorted order.





- Paddy.






P.S. Work pays me to be this finickity when testing ASICs

Friday, April 11, 2008

That history meme, in Python!


It seemed that everyone was joining in the 'history meme', and finding out what was in their shell history:

They were all using unix command line tools, piped together to show the most frequent commands they had used:
history|awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head

This being a Python blog, and knowing beforehand that Python is really awful at one-liners, I nevertheless decided that it would be of some use to try to pipe history to a Python one-liner that performed a similar task.

I came up with:
bash$ history | python  -c 'import sys,itertools,pprint; 
    pprint.pprint(sorted([
        (len(list(g)),k) for k,g in itertools.groupby(sorted([
            x.split()[1]for x in sys.stdin if len(x.split())>1])
            , lambda x:x)
        ])
        [-10:][::-1])' 
[(63, 'echo'),
 (41, 'history'),
 (30, './tst1.sh'),
 (28, 'declare'),
 (21, 'python'),
 (18, 'perl'),
 (18, 'cat'),
 (16, 'ls'),
 (15, 'xterm'),
 (15, 'history|python')]
bash$ 
I decided to break the command into multiple lines for readability above; it was developed, painfully, all on one line.

So readers, the above is something that Python stinks at - one-liners. (Unless you know a better way in Python ...)
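One slightly tidier alternative (assuming a Python recent enough to have collections.Counter, which does the grouping and ranking for you):

history | python -c 'import sys, collections
print(collections.Counter(
    line.split()[1] for line in sys.stdin if len(line.split()) > 1
    ).most_common(10))'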

- Paddy.

Tuesday, March 25, 2008

Writing a VCD to toggle-count generator in Python

I have an interesting problem at work which had been taxing me since before the Easter break. One of the less traditional ways forward is to write a toggle count utility - something to take a simulation of a hardware design and count which nets transition to both a zero and a one in the simulation. For various reasons I could not get the design to simulate in a modern version of a simulator with in-built toggle counting, without having to rebuild a C-based testbench for a later version of an OS and a simulator that a major part of the TB is not certified on.



I decided to spend my Easter Monday writing a VCD
to toggle-count
utility.



I started by searching my C: drive for a non-trivial VCD file and found a ~200k/300-net sample generated from the Icarus Verilog program. I then googled for the file format, found the IEEE 1364-2001 document, and went with that. I googled for ready-made utilities too and found an interesting document on mining VCD files and parsing RTL to generate all sorts of coverage metrics, but the VCD readers I found were in C, e.g. GTKWave, and didn't seem to clarify how to write a VCD reader. (OK, specifically - the liberal sprinkling of gotos in the GTKWave sources put me off; there, I've said it :-).



I moved on to read the IEEE spec, peruse my example VCD files, and reject writing an eVCD (extended VCD) parser, as I could later extend a VCD parser, and I think the simulator at work has enough controls to tailor the generated VCD to be what I want.



The VCD files I will eventually have to work on may be a gigabyte in size, so I want to step through the file one token at a time, gathering stats as I go. My reading of the VCD file format seemed to show that that was possible, so the first line of Python I wrote was this line:



175   tokeniser = (word for line in f for word in line.split() if word)


The above generator comprehension just gives successive words from the VCD file without storing a huge chunk of the file in memory. The tokeniser worked originally with:

154 keyword2handler = {
155     # declaration_keyword ::=
156     "$comment": drop_declaration,
157     "$date": vcd_date,
158     "$enddefinitions": vcd_enddefinitions,
159     "$scope": vcd_scope,
160     "$timescale": vcd_timescale,
161     "$upscope": vcd_upscope,
162     "$var": vcd_var,
163     "$version": vcd_version,
164     # simulation_keyword ::=
165     "$dumpall": vcd_dumpall,
166     "$dumpoff": vcd_dumpoff,
167     "$dumpon": vcd_dumpon,
168     "$dumpvars": vcd_dumpvars,
169     "$end": vcd_end,
170     }

and the 'switch statement':

176   for count,token in enumerate(tokeniser):
181       keyword2handler[token](tokeniser, token)

... to form the guts of the program, but when I had fleshed out all the declaration keyword handlers beyond mere stubs, I decided to add the second half of the parse routine, as I realised that I would skip all the simulation keywords and only needed to concentrate on the first character of a token to determine what to do. This led to the finished form of the function vcd_toggle_count.



I had thought that the rules for left-expanding a VCD value to a larger word size might lead to a compact Python solution, and I was pleased with my result, which I tested in the Python interpreter as:

>>> for number in "10 X10 ZX0 0X10".split():
...     extend = size - len(number)
...     print "%-4s -> %s" % (number,
...         ('0' if number[0]=='1' else number[0])*extend + number)
...
10   -> 0010
X10  -> XX10
ZX0  -> ZZX0
0X10 -> 0X10


This became:


 81             extend = stats.size - len(number)
 82             if extend:
 83                 number = ('0' if number[0]=='1' else number[0])*extend + number




Well, enough prattling on about how I wrote the program; the full
source is below, and I'm off to work to try it for real. It should
work, and since I have done no speed optimisations as yet, I am
confident that I could get acceptable run-times out of variants of it
to handle a gigabyte VCD file if I get one.



  1 #!python
  2 '''
  3 Extract toggle count from vcd file
  4
  5 Refer to IEEE standard 1364 2001
  6 (http://inst.eecs.berkeley.edu/~cs150/ProtectedDocs/verilog-ieee.pdf)
  7
  8 Author Donald 'Paddy' McCarthy (C) 24 March 2008
  9 '''
 10
 11 from __future__ import with_statement
 12 from itertools import dropwhile, takewhile, izip
 13 from collections import defaultdict
 14 from pprint import pprint as pp
 15
 16 vcdfile = r"C:\cygwin\home\HP DV8025EA\tmp\ivtest_v1.0\test_div16.vcd"
 17
 18 class VCD(object):
 19     def __init__(self):
 20         self.scope = []
 21         self.idcode2references = defaultdict(list)
 22         self.reference2idcode = dict()
 23         self.enddefinitions = False
 24         self.id2stats = dict()    # Maps id to its accumulated statistics
 25     def textstats(self):
 26         total, updown, uponly, downonly = 0,0,0,0
 27         out = []
 28         for ref in sorted(self.reference2idcode.keys()):
 29             id = self.reference2idcode[ref]
 30             stats = self.id2stats[id]
 31             if stats.size == 1:
 32                 total +=1
 33                 if stats.zero2one and stats.one2zero:
 34                     updown +=1
 35                     covered = 'PASS'
 36                 elif stats.zero2one:
 37                     uponly +=1
 38                     covered = 'FAIL0'
 39                 elif stats.one2zero:
 40                     downonly +=1
 41                     covered = 'FAIL1'
 42                 else:
 43                     covered = 'FAIL10'
 44                 out.append( " %-50s %s" % ( '"'+".".join(x[1] for x in ref)+'":', (covered, stats.zero2one, stats.one2zero)) )
 45             else:
 46                 total += stats.size
 47                 for count, (one2zero, zero2one) in enumerate(izip(stats.one2zero, stats.zero2one)):
 48                     if zero2one and one2zero:
 49                         updown +=1
 50                         covered = 'PASS'
 51                     elif zero2one:
 52                         uponly +=1
 53                         covered = 'FAIL0'
 54                     elif one2zero:
 55                         downonly +=1
 56                         covered = 'FAIL1'
 57                     else:
 58                         covered = 'FAIL10'
 59                     name = ".".join( x[1] for x in (ref+(('BIT:','<'+str(count)+'>'),)) )
 60                     out.append( " %-50s %s" % ( '"'+name+'":', (covered, zero2one, one2zero)) )
 61         header = "# TOGGLE REPORT: %g %%, %i / %i covered. %i up-only, %i down-only." % (
 62             updown/1.0/total*100, updown, total, uponly, downonly )
 63         body = "toggle={\n" + "\n".join(out) + '\n }'
 64         return header, body
 65
 66     def scaler_value_change(self, value, id):
 67         if value in '01' :
 68             stats = self.id2stats[id]
 69             if not stats.value:
 70                 stats.value = value
 71             elif stats.value != value:
 72                 stats.value = value
 73                 if value == '0':
 74                     stats.one2zero +=1
 75                 else:
 76                     stats.zero2one +=1
 77
 78     def vector_value_change(self, format, number, id):
 79         if format == 'b':
 80             stats = self.id2stats[id]
 81             extend = stats.size - len(number)
 82             if extend:
 83                 number = ('0' if number[0]=='1' else number[0])*extend + number
 84             newdigit, newone2zero, newzero2one = [],[],[]
 85             for digit, olddigit, one2zero, zero2one in izip(number, stats.value, stats.one2zero, stats.zero2one):
 86                 if digit in '01' and olddigit and olddigit != digit:
 87                     if digit == '0':
 88                         one2zero +=1
 89                     else:
 90                         zero2one +=1
 91                 elif digit not in '01':
 92                     digit = olddigit
 93                 newdigit.append(digit)
 94                 newone2zero.append(one2zero)
 95                 newzero2one.append(zero2one)
 96             stats.value, stats.one2zero, stats.zero2one = newdigit, newone2zero, newzero2one
 97
 98
 99 class IdStats(object):
100     def __init__(self, size):
101         size = int(size)
102         self.size = size
103         if size ==1:
104             self.value = ''
105             self.zero2one = 0
106             self.one2zero = 0
107         else:
108             # stats for each bit
109             self.value = ['' for x in range(size)]
110             self.zero2one = [0 for x in range(size)]
111             self.one2zero = [0 for x in range(size)]
112     def __repr__(self):
113         return "<IdStats: " + repr((self.size, self.value, self.zero2one, self.one2zero)) + ">"
114
115
116 vcd = VCD()
117
118 def parse_error(tokeniser, keyword):
119     raise "Don't understand keyword: " + keyword
120
121 def drop_declaration(tokeniser, keyword):
122     dropwhile(lambda x: x != "$end", tokeniser).next()
123
124 def save_declaration(tokeniser, keyword):
125     vcd.__setattr__(keyword.lstrip('$'),
126                     " ".join( takewhile(lambda x: x != "$end", tokeniser)) )
127 vcd_date = save_declaration
128 vcd_timescale = save_declaration
129 vcd_version = save_declaration
130
131 def vcd_enddefinitions(tokeniser, keyword):
132     vcd.enddefinitions = True
133     drop_declaration(tokeniser, keyword)
134 def vcd_scope(tokeniser, keyword):
135     vcd.scope.append( tuple(takewhile(lambda x: x != "$end", tokeniser)))
136 def vcd_upscope(tokeniser, keyword):
137     vcd.scope.pop()
138     tokeniser.next()
139 def vcd_var(tokeniser, keyword):
140     var_type, size, identifier_code, reference = tuple(takewhile(lambda x: x != "$end", tokeniser))
141     reference = vcd.scope + [('var', reference)]
142     vcd.idcode2references[identifier_code].append( (var_type, size, reference))
143     vcd.reference2idcode[tuple(reference)] = identifier_code
144     vcd.id2stats[identifier_code] = IdStats(size)
145 def vcd_dumpall(tokeniser, keyword): pass
146 def vcd_dumpoff(tokeniser, keyword): pass
147 def vcd_dumpon(tokeniser, keyword): pass
148 def vcd_dumpvars(tokeniser, keyword): pass
149 def vcd_end(tokeniser, keyword):
150     if not vcd.enddefinitions:
151         parse_error(tokeniser, keyword)
152
153
154 keyword2handler = {
155     # declaration_keyword ::=
156     "$comment": drop_declaration,
157     "$date": vcd_date,
158     "$enddefinitions": vcd_enddefinitions,
159     "$scope": vcd_scope,
160     "$timescale": vcd_timescale,
161     "$upscope": vcd_upscope,
162     "$var": vcd_var,
163     "$version": vcd_version,
164     # simulation_keyword ::=
165     "$dumpall": vcd_dumpall,
166     "$dumpoff": vcd_dumpoff,
167     "$dumpon": vcd_dumpon,
168     "$dumpvars": vcd_dumpvars,
169     "$end": vcd_end,
170     }
171 keyword2handler = defaultdict(parse_error, keyword2handler)
172
173 def vcd_toggle_count(vcdfile):
174     f = open(vcdfile)
175     tokeniser = (word for line in f for word in line.split() if word)
176     for count,token in enumerate(tokeniser):
177         if not vcd.enddefinitions:
178             # definition section
179             if token != '$var':
180                 print token
181             keyword2handler[token](tokeniser, token)
182         else:
183             if count % 10000 == 0:
184                 print count, "\r",
185             c, rest = token[0], token[1:]
186             if c == '$':
187                 # skip $dump* tokens and $end tokens in sim section
188                 continue
189             elif c == '#':
190                 vcd.now = rest
191             elif c in '01xXzZ':
192                 vcd.scaler_value_change(value=c, id=rest)
193             elif c in 'bBrR':
194                 vcd.vector_value_change(format=c.lower(), number=rest, id=tokeniser.next())
195             else:
196                 raise "Don't understand: %s After %i words" % (token, count)
197     print count
198     f.close()
199
200 vcd_toggle_count(vcdfile)
201 header, body = vcd.textstats()
202 print '\n'+header+'\n\n'+body+'\n'

STOP PRESS!
I'm back from work and the program worked with minor changes:
  1. I used the fileinput module to allow greater flexibility in specifying the input VCD file (a sketch of this change is below the list).
  2. The simulator at work had a slightly different interpretation of the spec around the definition of $var. (The spec needs to explicitly mark where spaces can/must occur.)
  3. I had missed adding commas to separate the output lines, which should form a valid Python dict.
With the core algorithm unchanged, the program churned through 200 Mbytes of VCD file in 3 minutes. Two gigabytes in 30 minutes is fine for me.
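
For the first change, here is a minimal sketch of the kind of thing fileinput
allows (file names from the command line, or stdin if none are given); the
function name is mine and the dispatch body is elided:

    import fileinput

    def vcd_toggle_count_stream():
        # fileinput.input() iterates over the lines of the files named in
        # sys.argv[1:], or over stdin when no file names are given
        tokeniser = (word
                     for line in fileinput.input()
                     for word in line.split() if word)
        for count, token in enumerate(tokeniser):
            pass  # ... same two-phase dispatch as in vcd_toggle_count above ...

That lets the same script be run as, say, "python vcd_toggle.py test_div16.vcd"
or be fed the VCD file from a pipe.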