Go deh!

Mainly Tech projects on Python and Electronic Design Automation.

Sunday, October 13, 2019

ISO26262:: Some thoughts on Specs.

It seems there are many engineers who can do, but who struggle with writing the documentation.
In safety flows, everything comes from the specification: the designer designs to the spec., and the verification team verifies that the spec. is correctly implemented.
  • The spec. is central.
  • The spec. is a document written by engineers.
  • In modern system-on-chip designs, the spec. is complex.
There's an aphorism (yanked from the Zen of Python) that states "complex is better than complicated", which I take to mean that you need to look deeper for the structure in multi-faceted, intricate systems. How you find and present/document that structure can make a world of difference to how well, and how quickly, the system is understood.

Structured data

Let's call those that develop specs Concept Engineers. Concept engineers will have their modelling tools, scripts, spreadsheets, etc. that they use to converge on the correct design, which they then
proceed to write up as the spec. Those tools often create a wealth of structured data such as:
  • Register maps
  • Register field descriptions
  • Memory maps
  • Top level bus maps
  • ...
Concept engineers can produce structured data in many formats, such as XML, JSON, SQL databases, CSV files, and REST-accessible web databases.

When writing the spec: parts can be automated by using these concept, pre-spec data sources to generate sections of the document.

When implementing and verifying the design: parts can be automated by using these same data sources.
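As a minimal sketch of the first of those generation steps, assuming the concept tools have dumped the register map as JSON (the file name, field names and the REGISTER:: pre-tag format, described under "Scraping aids" below, are all illustrative):

import json

def register_section(reg):
    "Render one register definition as pre-tagged spec text."
    lines = [f"REGISTER:: {reg['name']}  (offset 0x{reg['offset']:04X})",
             "field         bitrange  type  comment"]
    for field in reg["fields"]:
        lines.append(f"{field['name']:<13} {field['bits']:<9} "
                     f"{field['type']:<5} {field['comment']}")
    return "\n".join(lines)

if __name__ == "__main__":
    # e.g. [{"name": "CTRL", "offset": 16, "fields": [...]}, ...]
    with open("concept_registers.json") as fh:
        registers = json.load(fh)
    print("\n\n".join(register_section(r) for r in registers))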

What is often left out is that ISO26262 mandates that all data come from the spec.

What is the spec?

I assume that the specification must be print-friendly and understandable. You need to convince auditors that what is delivered is an expression of the spec. I am assuming that expressing all of the concept-stage structured data in textual form and appending reams of XML as an appendix is unacceptable. You need register tables, state machine diagrams, truth tables, ...
We have standard ways of expressing our technical concepts, and they are expected to be used. In the world of safety, they need to be built upon.

Round tripping

If an item is auto-generated from structured data for use by the design verification team, then:
  1. The spec. should contain that data.
  2. Create a script that can scrape the end spec. format (usually a PDF) and regenerate the structured data - identical enough that a simple textual diff will show they are the same (as sketched below).
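A minimal sketch of that round trip, assuming the concept data is JSON and the scraper has already rebuilt its own JSON from the spec PDF (both file names are illustrative):

import difflib
import json

def canonical(path):
    "Load JSON and re-dump it pretty-printed with sorted keys, for stable diffing."
    with open(path) as fh:
        return json.dumps(json.load(fh), indent=2, sort_keys=True) + "\n"

original = canonical("concept_registers.json")  # from the concept tools
scraped = canonical("scraped_registers.json")   # regenerated from the spec PDF

diff = "".join(difflib.unified_diff(original.splitlines(keepends=True),
                                    scraped.splitlines(keepends=True),
                                    fromfile="concept", tofile="scraped"))
if diff:
    print(diff)
    raise SystemExit("FAIL! Spec and concept data differ")
print("PASS! Round trip: spec matches concept data")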

Scraping aids:

(I.e. aids to scraping data from a specification, much as web scraping does for HTML.)
  • I usually take the spec in PDF as the format for reading. Luckily, if you check with your PDF generation tools, there are PDF readers that can convert PDFs to spreadsheets. In my current case, PDF text lines appear as spreadsheet rows with the text line all in the leftmost column, PDF table columns appear in multiple cells of rows, and each PDF page is a separate sheet of the workbook.
    Python and other languages have libraries that can read spreadsheets (see the sketch after this list).
  • The "original" structured data generated from concept engineering tools should be "pretty printed" and ordered before use in downstream flows, to allow easier textual comparison by diff'ing or other simple means.
  • When generating sections of text for a spec from structured data, pre-tag that section.
    Pre-tagging means adding a recognised word or sequence of words immediately before the generated spec data that denotes the format of that data chunk. For example, it could be a new line starting REGISTER:: that must always start a register definition with its fields arranged, in order, inside a 31-to-16, then 15-to-0 annotated horizontal table; then a table with headers of maybe "field, bitrange, type, comment"; defined text specifying register features; ...
    That pre-tag format should be used throughout the document and should not detract from how it reads. I use an example of a word followed by a double colon above; a hash, '#', followed by a word (a hashtag) would work too, but choose a format and stick to it.
    Making the tag immediately precede what is tagged, and having data fields with the same tag expressed in the same format, aids scraping enormously. (And reading too.)
  • Show the structure in the items: if a pre-tagged item, such as a register name, has a range of values, then show a parameterised name with a named index, and show how the index links to the properties of the register (in this case) that are designed to vary with the index, e.g. register offset, any of the register's bitfields, reset values, ...
    Don't expand the index in the spec., as valuable information may be lost. It may seem easier, if the index has only two values, to add two separate "expanded" entries, but then their inter-relationships, and the very fact that they are related, must be inferred rather than being given.
    Different, separate, parameterised items may then share the same named index and index range to show extra information.
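As a sketch of reading such a spreadsheet export and pulling out the pre-tagged blocks, assuming the openpyxl library, an .xlsx export with one sheet per PDF page and text starting in the leftmost cells, and the REGISTER:: tag described above (the file name and the blank-row end-of-block rule are illustrative):

from openpyxl import load_workbook  # third-party: pip install openpyxl

def scrape_tagged_blocks(xlsx_path, tag="REGISTER::"):
    "Yield (name, rows) for every block introduced by the pre-tag."
    wb = load_workbook(xlsx_path, read_only=True)
    for sheet in wb.worksheets:  # one sheet per PDF page
        rows = [["" if cell is None else str(cell).strip() for cell in row]
                for row in sheet.iter_rows(values_only=True)]
        for i, row in enumerate(rows):
            if row and row[0].startswith(tag):
                name = row[0][len(tag):].strip()
                block = []
                for later in rows[i + 1:]:
                    # End the block at the next tag or a blank row.
                    if not any(later) or later[0].startswith(tag):
                        break
                    block.append(later)
                yield name, block

if __name__ == "__main__":
    for name, block in scrape_tagged_blocks("spec_export.xlsx"):
        print(f"{name}: {len(block)} rows scraped")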

Scraping benefits

  • Data duplication in specs: explaining a topic might need the mention of a tagged item before items of that type are all shown, for example specific registers before the section where all registers are shown as part of the register map. When the spec is scraped, the scraper can make sure multiple definitions are equivalent (a sketch of such a check follows this list).
  • Scraped data can be used to regenerate the concept data used in design and verification flows before the spec was finalised, to ensure the spec is correct. (Or those tools can be rerun on the spec's scraped data.)
  • By thinking of scraping needs, you are forced to think about finding the patterns in the data, and about ensuring the spec is complete.
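A sketch of that duplicate-definition check, reusing the scrape_tagged_blocks sketch above (both are illustrative, not a fixed API):

from collections import defaultdict

def check_duplicates(tagged_blocks):
    "tagged_blocks: iterable of (name, rows). Report names whose repeated definitions differ."
    seen = defaultdict(list)
    for name, rows in tagged_blocks:
        seen[name].append(rows)
    ok = True
    for name, versions in seen.items():
        if any(version != versions[0] for version in versions[1:]):
            print(f"FAIL! {name} has {len(versions)} differing definitions")
            ok = False
    return ok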

Diagrams

I find they are too much bother to try and scrape. I'd cut down on expressing concept data used to generate verification and design parts as only diagrams in the spec. Most diagrams do have textual counterparts, such as netlists. Creative layout of textual, parsable "code" may suffice, or be used in addition to the diagram that shows the same thing - scraping might then just point out an area for manual checking.

When you just can't get away from that long list

Let's say you have 1024 registers identical in all but name and "power domain". You could use a parameterised name for the register type, with an alias for each indexed instance of that register type. The data format for registers needs to expand to have a field pointing to a named, pre-tagged data table that would map each index to its alias and power domain. (Of course, if the table is never very wide, then it could be doubled-up on the page to save space and make the spec. more presentable - someone will read it.)
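A sketch of the structured data such a pre-tagged index table could be generated from, with the alias and power-domain rules invented purely for illustration:

# Hypothetical: 1024 instances of one register type, parameterised by index n.
N_REGS = 1024

def alias(n):
    "Illustrative alias rule - in practice this comes from the concept data."
    return f"CH{n:04d}_CTRL"

def power_domain(n):
    "Illustrative rule: even channels in PD_A, odd channels in PD_B."
    return "PD_A" if n % 2 == 0 else "PD_B"

# INDEXTABLE:: CH_CTRL  - maps each index to its alias and power domain.
index_table = [(n, alias(n), power_domain(n)) for n in range(N_REGS)]

for n, name, domain in index_table[:4]:  # first few rows only
    print(f"{n:4d}  {name}  {domain}")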

END.

(There are other aspects of spec writing I need to write about in further posts.)

Tuesday, August 27, 2019

N-Dimensional matrix to 1-D array indexing translations.


Having done the 2-D address indexing translations, I thought about how to translate between a set of 3-D indices and a linear 1-D array index, then extrapolated to n dimensions.

I liked the idea of testing the solution and have brought that across too (with additions).

The class


# -*- coding: utf-8 -*-
"""
Created on Tue Aug 27 01:49:51 2019

@author: Paddy3118
"""

from collections import OrderedDict
from itertools import product
from functools import reduce

#%%

class ND21D_Addressing():
    """
    Convert n-dimensional indexing to/from 1-D index as if packed  
    into 1-D array. 
    All indices assumed to start from zero
    """
    def __init__(self, *extent):
        "extent is tuple of index sizes in each dimension"
        n_dim = len(extent) # Dimensionality
        self._extent = extent
        self._offsets = [reduce(int.__mul__, 
                               extent[n + 1:], 1) 
                        for n in range(n_dim)]

    # What n-dimensional index-tuple is stored at linear index.
    def i2ndim(self, index_i):
        "1-D array index to to n-D tuple of indices"
        return tuple((index_i // s) % c 
                     for s, c in zip(self._offsets, self._extent))
    
    # What linear 1-D index stores n-D tuple of indices.
    def ndim2i(self, ni):
        "n-D tuple of indices to 1-D array index"
        return sum(d * s for s, d in zip(self._offsets, ni))
    
    def __repr__(self):
        return f"{self.__class__.__name__}({str(self._extent)[1:-1]})"
#%%
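
A quick interactive check of the class, using the same extent of (2, 3, 4, 5) that the test below ends up exercising (the expected values are worked out by hand):

>>> admap = ND21D_Addressing(2, 3, 4, 5)
>>> admap.ndim2i((1, 2, 3, 4))   # 1*60 + 2*20 + 3*5 + 4*1
119
>>> admap.i2ndim(119)
(1, 2, 3, 4)
>>> admap
ND21D_Addressing(2, 3, 4, 5)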
    


The test

def _irange(mini, maxi):
    "Integer range mini-to-maxi inclusive of _both_ endpoints"
    # Some think ranges should include _both_ endpoints, oh well.
    return range(mini, maxi+1)

def _print_n_dim(ranges_from_zero):
    "Represent the indexing of an n-D matrix"
    last = [0] * len(ranges_from_zero)
    for ni in product(*ranges_from_zero):
        for s, t in zip(last, ni):
            if s != t and t == 0: print()
        last = ni
        print(str(ni).replace(' ', ''), end=' ')
    print()

#%%
if __name__ == "__main__":
    # Dimensionality for test
    n_dim = 4
    
    # range of values in each dimension.
    dranges = [_irange(0, d+1) for d in range(n_dim)]
    # Num of values in each dim.
    extent = [len(dr) for dr in dranges]  
    
    ## The address mapper instance
    admap = ND21D_Addressing(*extent)
    
    ## A test matrix of given dimensionality
    # Optimum size of mapping to 1-dim. array
    size_1d = reduce(int.__mul__, extent)  
    # Range of all mapped to 1-dim. array index values
    range_1d = _irange(0, size_1d - 1)  

    print(f"\n## ORIGINAL {n_dim}-D ARRAY:")
    _print_n_dim(dranges)

    print(f"\n# TEST TRIAL MAP {n_dim}-D TO/FROM 1-D ARRAY ADDRESSING")
          
    # Representing a 1-D array mapped to n-D index tuple 
    dim_1 = OrderedDict((index_i, admap.i2ndim(index_i)) 
                        for index_i in range_1d)
    all_ndim = set(dim_1.values())
    all_by_dim  = [set(d1) for d1 in zip(*all_ndim)]
    assert len(all_ndim) == size_1d, "FAIL! ndim index count"
    for a_dim, its_count in zip(all_by_dim, extent):
        assert len(set(a_dim)) == its_count, \
               "FAIL! ndim individual index count"
               
    # Representing n-D index tuple mapped to 1-D index
    dim_n = OrderedDict(((ndim), admap.ndim2i(ndim))
                        for ndim in product(*dranges))
    all_i = set(dim_n.values())
    assert min(all_i) == 0, "FAIL! Min index_i not zero"
    assert max(all_i) == size_1d - 1, \
           f"FAIL! Max index_i not {size_1d - 1}"

    # Check inverse mappings
    assert all(dim_1[dim_n[ndim]] == ndim 
               for ndim in dim_n), \
           "FAIL! Mapping n-D to/from 1-D indices"
    assert all(dim_n[dim_1[index_i]] == index_i 
               for index_i in range_1d), \
           "FAIL! Mapping 1-D to/from n-D indices"

    print(f"  {admap}: PASS!")



The test output


## ORIGINAL 4-D ARRAY:
(0,0,0,0) (0,0,0,1) (0,0,0,2) (0,0,0,3) (0,0,0,4) 
(0,0,1,0) (0,0,1,1) (0,0,1,2) (0,0,1,3) (0,0,1,4) 
(0,0,2,0) (0,0,2,1) (0,0,2,2) (0,0,2,3) (0,0,2,4) 
(0,0,3,0) (0,0,3,1) (0,0,3,2) (0,0,3,3) (0,0,3,4) 

(0,1,0,0) (0,1,0,1) (0,1,0,2) (0,1,0,3) (0,1,0,4) 
(0,1,1,0) (0,1,1,1) (0,1,1,2) (0,1,1,3) (0,1,1,4) 
(0,1,2,0) (0,1,2,1) (0,1,2,2) (0,1,2,3) (0,1,2,4) 
(0,1,3,0) (0,1,3,1) (0,1,3,2) (0,1,3,3) (0,1,3,4) 

(0,2,0,0) (0,2,0,1) (0,2,0,2) (0,2,0,3) (0,2,0,4) 
(0,2,1,0) (0,2,1,1) (0,2,1,2) (0,2,1,3) (0,2,1,4) 
(0,2,2,0) (0,2,2,1) (0,2,2,2) (0,2,2,3) (0,2,2,4) 
(0,2,3,0) (0,2,3,1) (0,2,3,2) (0,2,3,3) (0,2,3,4) 


(1,0,0,0) (1,0,0,1) (1,0,0,2) (1,0,0,3) (1,0,0,4) 
(1,0,1,0) (1,0,1,1) (1,0,1,2) (1,0,1,3) (1,0,1,4) 
(1,0,2,0) (1,0,2,1) (1,0,2,2) (1,0,2,3) (1,0,2,4) 
(1,0,3,0) (1,0,3,1) (1,0,3,2) (1,0,3,3) (1,0,3,4) 

(1,1,0,0) (1,1,0,1) (1,1,0,2) (1,1,0,3) (1,1,0,4) 
(1,1,1,0) (1,1,1,1) (1,1,1,2) (1,1,1,3) (1,1,1,4) 
(1,1,2,0) (1,1,2,1) (1,1,2,2) (1,1,2,3) (1,1,2,4) 
(1,1,3,0) (1,1,3,1) (1,1,3,2) (1,1,3,3) (1,1,3,4) 

(1,2,0,0) (1,2,0,1) (1,2,0,2) (1,2,0,3) (1,2,0,4) 
(1,2,1,0) (1,2,1,1) (1,2,1,2) (1,2,1,3) (1,2,1,4) 
(1,2,2,0) (1,2,2,1) (1,2,2,2) (1,2,2,3) (1,2,2,4) 
(1,2,3,0) (1,2,3,1) (1,2,3,2) (1,2,3,3) (1,2,3,4) 

# TEST TRIAL MAP 4-D TO/FROM 1-D ARRAY ADDRESSING
  ND21D_Addressing(2, 3, 4, 5): PASS!


END.

Monday, August 26, 2019

2-Dimension matrix to 1-D array, index translations.

Work has me dealing with hardware registers. Many registers; arrayed registers; multi-arrayed registers!

The verification library I use has code to handle 1-D arrays of registers, but not a 2-D (matrix) of registers - which is the problem I have today.

Problem Statement

A useful description of the problem is:
Given a 2-D matrix of values to store and access in a system that allows the storing of 1-D arrays of values, how do you map from the 2-D x,y indices to the 1-D i index - and vice versa?

Partial memory

It's several decades since I first looked into this, but I do remember divmod! Divmod was a part of the solution: divmod(x, y) returns (x // y, x % y), i.e. the integer quotient and the integer remainder of x divided by y, as a tuple.
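For example, just to illustrate the builtin:

>>> divmod(7, 3)    # (7 // 3, 7 % 3)
(2, 1)
>>> divmod(11, 4)   # (11 // 4, 11 % 4)
(2, 3)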

Rather than "do the math" to work out the correct functions needed to generate a 1-D array index i from two matrix indices x and y - as well as the reverse function - I decided to take a suck-it-and-see approach. I had ideas that contained the correct solution and devised tests to reject faulty implementations.

Setup

  1. Indices in any dimension count up from zero.
  2. Use a different maximum index in each matrix dimension to aid later checks.
  3. I created function irange (line 14+), as sometimes people like to generate integer ranges that include both endpoints.

1-D to 2-D

Variable i_to_xy_list, on lines 35+, has the source for four alternative functions that, when given a 1-D index, generate a tuple of index numbers representing x and y. All four use divmod.

To test them I create a Python function from each line of text using eval in line 50. Then, knowing that a matrix with three possible x values and four possible y values indexes exactly 3*4 = 12 values, I use a 1-D index range of 0-to-11 inclusive to hold the corresponding x,y tuples generated, in the (ordered) dict dim_1 in line 52.
Sets all_xy, all_x, and all_y (lines 54-56) accumulate the different indexing number-pairs and the numbers seen in each dimension of the matrix indexing generated from this function i2xy. They are then tested, in lines 57-68, to ensure they have the expected number and range of individual indices.

2-D to 1-D

Similarly, xy_to_i_list has four possible ways that could match up with a passing 1-D to 2-D function to do the reverse conversion from x,y coords to linear array index i.
All permutations of the range of x and y index values are used to generate sample 1-D indices in dict dim_n (line 73), then the generated 1-D indices are checked (line 76+).

The last check, from line 86, checks that the i2xy and xy2i functions are inverses of each other, generating and decoding the same indices.

The output

## ORIGINAL 2-D ARRAY:
0,0 0,1 0,2 0,3
1,0 1,1 1,2 1,3
2,0 2,1 2,2 2,3

  # TRIAL MAPPINGS TO 1-D ARRAY
  FAIL! x count from `x, y = (lambda index_i: divmod(index_i, xcount))(index_i)`
  FAIL! Max index_i not 11 in `index_i = (lambda x, y: x * xcount + y)(x, y)`
  PASS! `x, y = (lambda index_i: divmod(index_i, ycount))(index_i); index_i = (lambda x, y: x * ycount + y)(x, y)`
  FAIL! Max index_i not 11 in `index_i = (lambda x, y: x + y * ycount)(x, y)`
  FAIL! Mapping index_i to/from x, y using `x, y = (lambda index_i: divmod(index_i, ycount))(index_i); index_i = (lambda x, y: x + y * xcount)(x, y)`
  FAIL! Max index_i not 11 in `index_i = (lambda x, y: x * xcount + y)(x, y)`
  FAIL! Mapping index_i to/from x, y using `x, y = (lambda index_i: divmod(index_i, xcount)[::-1])(index_i); index_i = (lambda x, y: x * ycount + y)(x, y)`
  FAIL! Max index_i not 11 in `index_i = (lambda x, y: x + y * ycount)(x, y)`
  PASS! `x, y = (lambda index_i: divmod(index_i, xcount)[::-1])(index_i); index_i = (lambda x, y: x + y * xcount)(x, y)`
  FAIL! x count from `x, y = (lambda index_i: divmod(index_i, ycount)[::-1])(index_i)`

# SUMMARY
  PASS! `x, y = (lambda index_i: divmod(index_i, ycount))(index_i); index_i = (lambda x, y: x * ycount + y)(x, y)`
  PASS! `x, y = (lambda index_i: divmod(index_i, xcount)[::-1])(index_i); index_i = (lambda x, y: x + y * xcount)(x, y)`

Or to rewrite the lambdas as functions:
# These two:
def i2xy(i):
    return divmod(i, ycount)
def xy2i(x, y):
    return x * ycount + y
# Or these two:
def i2xy(i):
    return divmod(i, xcount)[::-1]  # reversed to give an (x, y) tuple
def xy2i(x, y):
    return x + y * xcount
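A quick sanity check of the first pair, with the post's xcount = 3 and ycount = 4:

>>> i2xy(7)      # divmod(7, ycount)
(1, 3)
>>> xy2i(1, 3)   # 1 * ycount + 3
7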



Code:

# -*- coding: utf-8 -*-
"""
Created on Sat Aug 24 21:31:03 2019

@author: Paddy3118
"""

from collections import OrderedDict
from itertools import product
from pprint import pprint as pp

#%%

def irange(mini, maxi):
    "Integer range mini-to-maxi inclusive of _both_ endpoints"
    # Some think ranges should include _both_ endpoints, oh well.
    return range(mini, maxi+1)

#%%
xrange = irange(0, 2)
yrange = irange(0, 3)
xcount = len(xrange) # 3
ycount = len(yrange) # 4

print("\n## ORIGINAL 2-D ARRAY:")
for x in xrange:
    print(' '.join(f"{x},{y}" for y in yrange))


#%%
print("\n  # TRIAL MAPPINGS TO 1-D ARRAY")
print_on_fail, print_on_pass = False, False

# Possible ways to get what n-dimensional index-tuple is stored at linear index
i_to_xy_list = """
lambda index_i: divmod(index_i, xcount)
lambda index_i: divmod(index_i, ycount)
lambda index_i: divmod(index_i, xcount)[::-1]
lambda index_i: divmod(index_i, ycount)[::-1]
""".strip().split('\n')
# Possible ways to generate a linear 1-D index from n-D tuple of indices
xy_to_i_list = """
lambda x, y: x * xcount + y
lambda x, y: x * ycount + y
lambda x, y: x + y * ycount
lambda x, y: x + y * xcount
""".strip().split('\n')
passes = []
for i_to_xy in i_to_xy_list:
    i2xy = eval(i_to_xy)
    # Representing a 1-D array as OrderedDict preserves insertion order
    dim_1 = OrderedDict((index_i, i2xy(index_i)) 
                        for index_i in irange(0, xcount*ycount - 1))
    all_xy = set(dim_1.values())
    all_x  = set(x for x, y in dim_1.values())
    all_y  = set(y for x, y in dim_1.values())
    if len(all_xy) != xcount * ycount:
        print(f"  FAIL! x,y count from `x, y = ({i_to_xy})(index_i)`")
        if print_on_fail: pp(dim_1)
        continue
    if len(all_x) != xcount:
        print(f"  FAIL! x count from `x, y = ({i_to_xy})(index_i)`")
        if print_on_fail: pp(dim_1)
        continue
    if len(all_y) != ycount:
        print(f"  FAIL! y count from `x, y = ({i_to_xy})(index_i)`")
        if print_on_fail: pp(dim_1)
        continue
    #
    for xy_to_i in xy_to_i_list:
        xy2i = eval(xy_to_i)
        # Representing a N-D array
        dim_n = OrderedDict(((x, y), xy2i(x, y))
                            for x, y in product(xrange, yrange))
        all_i = set(dim_n.values())
        if min(all_i) != 0:
            print(f"  FAIL! Min index_i not zero in "
                  f"`index_i = ({xy_to_i})(x, y)`")
            if print_on_fail: pp(dim_n)
            continue
        if max(all_i) != xcount * ycount - 1:
            print(f"  FAIL! Max index_i not {xcount * ycount - 1} in "
                  f"`index_i = ({xy_to_i})(x, y)`")
            if print_on_fail: pp(dim_n)
            continue
        if not all(dim_1[dim_n[xy]] == xy
                   for xy in dim_n):
            print(f"  FAIL! Mapping index_i to/from x, y using "
                  f"`x, y = ({i_to_xy})(index_i); index_i = ({xy_to_i})(x, y)`")
            if print_on_fail: pp(dim_1)
            if print_on_fail: pp(dim_n)
            continue
        passes.append((i_to_xy, xy_to_i))
        print(f"  PASS! `x, y = ({i_to_xy})(index_i); index_i = ({xy_to_i})(x, y)`")
        if print_on_pass:
            pp(dim_1)
            pp(dim_n)

print('\n# SUMMARY')
for i_to_xy, xy_to_i in passes:
    print(f"  PASS! `x, y = ({i_to_xy})(index_i); index_i = ({xy_to_i})(x, y)`")
#

Sunday, May 26, 2019

Nested attribute lookup in dicts

As part of a larger project, I wanted to explore the use of Python object  attribute access syntax to access items of a dict.

A base.

I looked around and found this article that gave me a base I could riff off. I want the attributes accessed to appear in the dictionary. I want nested attribute access to "work".

A new implementation.

I came up with the following code. Any attributes starting with an underscore are excused from the jiggery-pokery, as Spyder/IPython sets a few.

class AttrInDict(dict):
    "Move none-hidden attribute access to dict item"
 
    def __init__(self, *args, **kwargs):
        self._extra_attr = set()
        super().__init__(*args, **kwargs)
 
    def __getattr__(self, item):
        if item[0] != '_':
            if (not super().__contains__(item)  # It's new
                or (item in self._extra_attr    # It's an attr, now None
                    and super().__getitem__(item) is None)):
                super().__setitem__(item, AttrInDict())
            return super().__getitem__(item)
        else:
            return super().__getattr__(item)
 
    def __setattr__(self, item, val):
        if item[0] != '_':
            super().__setitem__(item, val)
            self._extra_attr.add(item)
        else:
            super().__setattr__(item, val)
 
    def __dir__(self):
        "To get tooltips working"
        supr = set(super().__dir__())
        return list(supr | self._extra_attr)


Class in action.

Python 3.7.1 | packaged by conda-forge | (default, Mar 13 2019, 13:32:59) [MSC v.1900 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.

IPython 7.1.1 -- An enhanced Interactive Python.

Restarting kernel...



In [1]: runfile('dictindict.py', wdir='pp_reg')

In [2]: # Start like a dict

In [3]: d = AttrInDict(x=3)

In [4]: d
Out[4]: {'x': 3}

In [5]: # Access like an attribute

In [6]: d.x
Out[6]: 3

In [7]: # Access an unknown attribute creates a sub-"dict"

In [8]: d.foo
Out[8]: {}

In [9]: d
Out[9]: {'x': 3, 'foo': {}}

In [10]: d.foo = 123

In [11]: d.foo
Out[11]: 123

In [12]: d
Out[12]: {'x': 3, 'foo': 123}

In [13]: # Access an unknown, unknown attribute creates sub, sub "dicts"

In [14]: d.tick.tock
Out[14]: {}

In [15]: d
Out[15]: {'x': 3, 'foo': 123, 'tick': {'tock': {}}}

In [16]: # Dict hierarchy preserved

In [17]: d.tick.tack = 22

In [18]: d
Out[18]: {'x': 3, 'foo': 123, 'tick': {'tock': {}, 'tack': 22}}

In [19]: d.tick.teck = 33

In [20]: d
Out[20]: {'x': 3, 'foo': 123, 'tick': {'tock': {}, 'tack': 22, 'teck': 33}}

In [21]: d.tick.tock.tuck = 'Boo'

In [22]: d
Out[22]: {'x': 3, 'foo': 123, 'tick': {'tock': {'tuck': 'Boo'}, 'tack': 22, 'teck': 33}}

In [23]: # Can tack on hierarchy to previous attribute with None value

In [24]: d.foo = None

In [25]: d
Out[25]:
{'x': 3,
'foo': None,
'tick': {'tock': {'tuck': 'Boo'}, 'tack': 22, 'teck': 33}}

In [26]: d.foo.bar = 42

In [27]: d
Out[27]:
{'x': 3,
'foo': {'bar': 42},
'tick': {'tock': {'tuck': 'Boo'}, 'tack': 22, 'teck': 33}}

In [28]: # Still like a dict

In [29]: d.keys()
Out[29]: dict_keys(['x', 'foo', 'tick'])

In [30]: d.values()
Out[30]: dict_values([3, {'bar': 42}, {'tock': {'tuck': 'Boo'}, 'tack': 22, 'teck': 33}])

In [31]:

END.

Friday, October 12, 2018

Three guys on math



Coder:

    (In Jeans and T-shirt, next to a cup of coffee) I look down on him (Indicates Excel'r) because I write proper programs.

Excel'r:

    (Trousers, shirt, no tie) I look up to him (Coder) because he writes proper programs; but I look down on him (Hand-Calculater) because he has no graphics. I have a GUI

Hand-Calculater:

    (Student) I know my place. I look up to them both. But I don't look up to him (Excel'r) as much as I look up to him (Coder), because he writes apps and games.

Coder:

    I do write apps and games, but I have a higher barrier to entry. So sometimes I look up (bends knees, does so) to him (Excel'r).

Excel'r:

    I still look up to him (Coder) because although I have easy access, I am vulgar. But I am not as vulgar as him (Hand-Calculater) so I still look down on him (Hand-Calculater).

Hand-Calculater:

    I know my place. I look up to them both; but while I am poor, I am honest, industrious and trustworthy. Had I the inclination, I could look down on them. But I don't.

Excel'r:

    We all know our place, but what do we get out of it?

Coder:

    I get a feeling of superiority over them.

Excel'r:

    I get a feeling of inferiority from him, (Coder), but a feeling of superiority over him (Hand-Calculater).

Hand-Calculater:

    I get RSI.


Original Sketch:

