In safety flows, everything comes from the specification - the designer designs to the spec., and the verification team verifies that the spec. is correctly implemented.
- The spec. is central.
- The spec. is a document written by engineers.
- In modern system-on-chip designs, the spec. is complex.
Structured data
Let's call those who develop specs Concept Engineers. Concept engineers have their modelling tools, scripts, spreadsheets, etc. that they use to converge on the correct design, which they then proceed to write up as the spec. Those tools often create a wealth of structured data such as:
- Register maps
- Register field descriptions
- Memory maps
- Top level bus maps
- ...
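As a concrete illustration (the names, offsets and field layouts below are invented for this example, not the output of any particular tool), concept-stage structured data for a small register map might look something like this once loaded into Python:

```python
# Hypothetical concept-stage structured data for part of a register map.
# Register names, offsets, reset values and fields are invented for illustration.
REGISTER_MAP = [
    {
        "name": "CTRL",
        "offset": 0x000,
        "reset": 0x0000_0000,
        "fields": [
            {"name": "ENABLE",   "bits": (0, 0),  "type": "RW", "comment": "Module enable"},
            {"name": "MODE",     "bits": (3, 1),  "type": "RW", "comment": "Operating mode"},
            {"name": "RESERVED", "bits": (31, 4), "type": "RO", "comment": "Reserved, reads 0"},
        ],
    },
    {
        "name": "STATUS",
        "offset": 0x004,
        "reset": 0x0000_0001,
        "fields": [
            {"name": "READY", "bits": (0, 0), "type": "RO", "comment": "Set when idle"},
            {"name": "ERROR", "bits": (8, 1), "type": "RO", "comment": "Last error code"},
        ],
    },
]
```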
When writing the spec: parts of the document can be generated automatically from these concept, pre-spec data sources.
When implementing and verifying the design: parts of the flow can be automated using the same data sources.
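To make the spec-writing case concrete, here is a minimal sketch of rendering a register entry (shaped like the hypothetical REGISTER_MAP above) as spec-ready text; the exact layout is an assumption for illustration, and the REGISTER:: tag it emits is the pre-tagging idea discussed under "Scraping aids" below.

```python
def register_section(reg):
    """Render one register entry (shaped like the REGISTER_MAP example above)
    as spec-ready text. The layout here is an illustration only."""
    lines = [f"REGISTER:: {reg['name']} (offset 0x{reg['offset']:03X}, reset 0x{reg['reset']:08X})"]
    lines.append("field | bitrange | type | comment")
    for field in reg["fields"]:
        hi, lo = field["bits"]
        lines.append(f"{field['name']} | {hi}:{lo} | {field['type']} | {field['comment']}")
    return "\n".join(lines)

# Example use, with the hypothetical data above:
# for reg in REGISTER_MAP:
#     print(register_section(reg), "\n")
```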
What is often overlooked: ISO 26262 mandates that all data come from the spec.
What is the spec?
I assume that the specification must be print friendly and understandable. You need to convince auditors that what is delivered is an expression of the spec. I am assuming that expressing all of the concept-stage structured data in textual form and appending reams of XML as an appendix is unacceptable. You need register tables, state machine diagrams, truth tables, ... We have standard ways of expressing our technical concepts that are expected to be used. In the world of safety, they need to be built upon.
Round tripping
If an item is auto-generated from structured data for use by the design verification team, then:
- The spec. should contain that data.
- Create a script that can scrape the end spec. format (usually a PDF) and regenerate the structured data - identical enough so that a simple textual diff will show they are the same.
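A minimal sketch of that round-trip check, assuming both the original and the scraped data can be pretty-printed into the same canonical, ordered text form (the helper names are invented for the example):

```python
import difflib
import json

def canonical(data):
    """Pretty-print structured data in a stable, ordered form so that
    a plain textual diff is meaningful."""
    return json.dumps(data, indent=2, sort_keys=True)

def round_trip_check(original_data, scraped_data):
    """Return True if the data scraped back out of the spec matches the
    original concept-stage data; otherwise print a diff of the mismatch."""
    before = canonical(original_data).splitlines()
    after = canonical(scraped_data).splitlines()
    if before == after:
        return True
    for line in difflib.unified_diff(before, after, "concept", "scraped", lineterm=""):
        print(line)
    return False
```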
Scraping aids:
(I.e. aids to scraping data from a specification, much as web scraping does for HTML.)
- I usually take the spec as a PDF for reading. Luckily, if you check with your PDF generation tools, there are PDF readers that can convert PDFs to spreadsheets. In my current case, PDF text lines appear as spreadsheet rows with the whole text line in the leftmost column, PDF table columns appear spread across multiple cells of a row, and each PDF page is a separate sheet of the workbook.
Python and other languages have libraries that can read spreadsheets (a sketch using one appears after this list).
- The "original" structured data generated from concept engineering tools should be "pretty printed" and ordered before use in downstream flows, to allow easier textual comparison by diff'ing or other simple means.
- When generating sections of text for a spec from structured data, pre-tag that section.
Pre-tagging means adding a recognised word or sequence of words immediately before the generated spec data that denotes the format of that data chunk. For example, it could be a new line starting REGISTER:: that must always start a register definition, with its fields arranged, in order, inside a horizontal table annotated 31-to-16, then 15-to-0; then a table with headers of maybe "field, bitrange, type, comment"; defined text specifying register features; ...
That pre-tag format should be used throughout the document and should not detract from how it reads. I use the example of a word followed by double colons above; a hash '#' followed by a word (a hashtag) would work too, but choose a format and stick to it.
Making the tag immediately precede what is tagged, and having data fields with the same tag expressed in the same format, aids scraping enormously (and reading too).
- Show the structure in the items: if a pre-tagged item, such as a register name, has a range of values, then show a parameterised name with a named index, and show how the index links to the properties of the register (in this case) that are designed to vary with the index, e.g. the register offset, any of the register's bitfields, reset values, ...
Don't expand the index in the spec., as valuable information may be lost. It may seem easier, if the index has only two values, to add two separate "expanded" entries, but then their inter-relationships, and the very fact that they are related, must be inferred rather than being given.
Different, separate, parameterised items may then share the same named index and index range to convey extra information.
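As promised above, here is a minimal sketch of scraping the PDF-derived workbook. It assumes the conversion described earlier (one sheet per PDF page, one text line per row), the openpyxl library, and the REGISTER:: pre-tag; the file name and the simple chunking logic are assumptions for illustration.

```python
from openpyxl import load_workbook

def spec_lines(xlsx_path):
    """Yield the text lines of the spec from the PDF-derived workbook:
    one sheet per PDF page, one row per text line."""
    wb = load_workbook(xlsx_path, read_only=True)
    for sheet in wb.worksheets:
        for row in sheet.iter_rows(values_only=True):
            # Join the cells of a row: PDF table columns land in separate cells.
            cells = [str(c) for c in row if c is not None]
            if cells:
                yield " ".join(cells)

def scrape_registers(xlsx_path, tag="REGISTER::"):
    """Collect the lines following each pre-tag, up to the next tag.
    Parsing the tables inside each chunk is left to a later pass."""
    registers = {}
    current = None
    for line in spec_lines(xlsx_path):
        if line.startswith(tag):
            current = line[len(tag):].strip().split(" ")[0] or "UNNAMED"
            registers[current] = []
        elif current is not None:
            registers[current].append(line)
    return registers

# Example use (hypothetical file name):
# regs = scrape_registers("my_block_spec.xlsx")
```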
Scraping benefits
- Data duplication in specs: explaining a topic might require mentioning a tagged item before items of that type are all shown, for example specific registers before the section where all registers appear as part of the register map. When the spec is scraped, the scraper can make sure that multiple definitions are equivalent (a sketch of such a check follows this list).
- Scraped data can be used to regenerate the concept data that was used in the design and verification flows before the spec was finalised, to ensure the spec is correct (or those tools can be rerun on the spec's scraped data).
- By thinking about scraping needs, you are forced to think about finding the patterns in the data, and about ensuring the spec is complete.
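And a minimal sketch of the duplicate-definition check from the first point above, assuming each scraped occurrence of a tagged item has already been reduced to a canonical text form (the function and argument names are invented for the example):

```python
from collections import defaultdict

def check_duplicates(occurrences):
    """occurrences: list of (name, canonical_text) pairs, one per place a
    tagged item (e.g. a register) is defined in the spec.
    Reports any item whose repeated definitions are not identical."""
    by_name = defaultdict(set)
    for name, text in occurrences:
        by_name[name].add(text)
    mismatches = {n: texts for n, texts in by_name.items() if len(texts) > 1}
    for name in sorted(mismatches):
        print(f"Inconsistent definitions of {name}: {len(mismatches[name])} variants found")
    return not mismatches
```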