
Thursday, March 29, 2012

Leo 4.10 final released

Leo 4.10 final is now available here.

Leo is a text editor, data organizer, project manager and much more.

Leo 4.10 contains 9 months of intense work on Leo. Several very important features are subtle; you could almost call them Easter Eggs, so please read the following notes carefully.

The highlights of Leo 4.10:

* Dozens of new and improved features and commands, including...
- Tab completion now shows all @command & @button nodes.
- Leo tabs may be detached from the main window.
- The Open With menu now works.
- The leoInspect module answers questions about Python code.
- Leo can highlight the pane containing the focus.
- The bigdash plugin searches across multiple files.
- Improved abbreviation capabilities.
- Improved handling of URLs.
- Improved editing of non-Leo files.
- Improvements that make unit testing "weightless".
- Improved Leo's home page.
* Easier installation on MacOS.
* Fixed almost 70 bugs.

The Easter Eggs

1. Tab completion now shows all @command & @button nodes.

Put all your common scripts in @command nodes in myLeoSettings.leo. Typing <Alt-X>@c <Tab> will remind you of the names of these scripts. You can execute the scripts by name without the "@command-" prefix.
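
For example, a @command node might look like the following sketch (the command name is made up; c and g are the names Leo predefines for scripts)::

    # Headline: @command show-node-info
    # Body: report the selected node in Leo's log pane.
    p = c.p
    g.es('headline: %s' % p.h)
    g.es('body: %s characters' % len(p.b))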

2. Improved abbreviation capabilities.

Virtually any kind of abbreviation is possible. For example, ~a to ã.

3. Improved handling of URLs.

URLs can be used as links to other Leo outlines.

4. Weightless ("waitless") unit testing.

The mantra is edit, alt-4 (run-marked-unit-tests-externally), edit, alt-4,... Several seemingly innocuous changes made this work without "friction". The result is a remarkable increase in productivity.

Links

Leo

Forum

Download

Thursday, March 22, 2012

Leo 4.10 b1 released

Leo 4.10 b1 March 21, 2012

Leo 4.10 b1 is now available at: http://sourceforge.net/projects/leo/files/

Leo is a text editor, data organizer, project manager and much more.
http://webpages.charter.net/edreamleo/intro.html

Leo 4.10 contains 9 months of intense work on Leo. Several very important
features are subtle; you could almost call them Easter Eggs, so please read
the following notes carefully.

The highlights of Leo 4.10:
---------------------------

* Dozens of new and improved features and commands, including...
- Tab completion now shows all @command & @button nodes.
- Leo tabs may be detached from the main window.
- The Open With menu now works.
- The leoInspect module answers questions about Python code.
- Leo can highlight the pane containing the focus.
- The bigdash plugin searches across multiple files.
- Improved abbreviation capabilities.
- Improved handling of URLs.
- Improved editing of non-Leo files.
- Improvements create "weightless" unit testing.
* Easier installation on MacOS.
* Fixed almost 70 bugs.

The Easter Eggs
---------------

1. Tab completion now shows all @command & @button nodes.

Put all your common scripts in @command nodes in myLeoSettings.leo.
Typing @c will remind you of the names of these scripts.
You can execute the scripts by name without the "@command-" prefix.

2. Improved abbreviation capabilities.

Virtually any kind of abbreviation is possible. For example, ~a to ã.

3. Improved handling of URLs.

URLs can link to other Leo outlines. Ctrl-click on nodes or URLs
in body text to activate the URL.

4. Weightless unit testing.

The mantra is edit, alt-4 (run-marked-unit-tests-externally), edit,
alt-4,... Several seemingly innocuous changes made this work without
"friction". The result is a remarkable increase in productivity.

Links:
------
Leo: http://webpages.charter.net/edreamleo/front.html
Forum: http://groups.google.com/group/leo-editor
Download: http://sourceforge.net/projects/leo/files/
Bzr: http://code.launchpad.net/leo-editor/
Quotes: http://webpages.charter.net/edreamleo/testimonials.html

Tuesday, June 21, 2011

Leo 4.9 final is now available here.

Leo is a text editor, data organizer, project manager and much more.

The highlights of Leo 4.9

- Leo uses the Qt gui everywhere, including plugins.
- Completed Leo's autocompleter.
- The rendering pane displays movies, html, svg images, etc.
- Nodes may contain multiple @language directives.
- Leo highlights URLs everywhere. Ctrl-click opens them in your web browser.
- Leo uses an @file node's extension to compute the default @language.
- Unified extract and import commands.
- New commands to manage uA's (user attributes).
- Added xml namespaces to .leo files.
- Fixed many bugs, some important, most minor.

Links

Leo
Forum
Download
Bzr
Quotes

Tuesday, June 14, 2011

Leo 4.9 b4 released

Leo 4.9 b4 is now available here. Leo is a text editor, data organizer, project manager and much more.

For more details, see this announcement at the leo-editor google group.

There are no remaining major items on Leo's to-do list, and no known bugs in Leo. Unless serious problems are reported, expect Leo 4.9 rc1 this Friday, June 17 and 4.9 final on Tuesday, June 21.

Edward

Saturday, June 4, 2011

Leo 4.9 b2 released

Leo 4.9 beta 2 is now available here.

Leo is a text editor, data organizer, project manager and much more.

The highlights of Leo 4.9

- Leo no longer supports the Tk gui: the Qt gui now does everything Tk did.
- Many fit-and-finish bugs fixed.
- Completed Leo's autocompleter.
- A new rendering pane displays movies, html, svg images, etc.
- The scrolledmessage plugin uses the rendering pane.
- Nodes may contain multiple @language directives.
- Leo highlights URLs everywhere. Ctrl-clicking a URL opens it in your web browser.
- Leo uses an @file node's extension by default if there is no @language directive in effect.
- Unified extract and import commands.
- Plain up/down arrow keys in headline-editing mode select a new node.
- New commands to manage uA's (user attributes).
- Added namespaces to .leo files.

Links

Leo
Forum
Download
Quotes

Monday, November 15, 2010

Announcing Leo 4.8 release candidate 1

Leo 4.8 rc1 is now available here.

Leo is a text editor, data organizer, project manager and much more. Look here for more information.

The highlights of Leo 4.8

Leo now uses the simplest possible sentinel lines in external files. External files with sentinels now look like Emacs org-mode files.

Leo Qt gui now supports Drag and Drop. This was one of the most frequently requested features.

Improved abbreviation commands. You now define abbreviations in Leo settings nodes, not external files.

@url nodes may contain URLs in body text. This allows headlines to contain summaries: very useful.

Leo now uses PyEnchant to check spelling.

Leo can now open multiple files from the command line.

Leo's ancient Tangle and Untangle commands are now deprecated. This will help newbies learn Leo.

Leo now shows "Resurrected" and "Recovered" nodes. These protect data and show how data have changed. These fix several long-standing data-related problems.

A new "screenshots" plugin for creating slide shows with Leo. I used this plugin to create Leo's introductory slide shows.

A better installer.

Many bug fixes.

Links

Leo
Forum
Download
Quotes

Friday, November 5, 2010

Announcing Leo 4.8 beta 1

Leo 4.8 beta 1 is now available here.

Leo is a text editor, data organizer, project manager and much more. Look here for more information.

The highlights of Leo 4.8

Leo now uses the simplest possible sentinel lines in external files. External files with sentinels now look like Emacs org-mode files.

Leo Qt gui now supports Drag and Drop. This was one of the most frequently requested features.

Improved abbreviation commands. You now define abbreviations in Leo settings nodes, not external files.

@url nodes may contain URLs in body text. This allows headlines to contain summaries: very useful.

Leo now uses PyEnchant to check spelling.

Leo can now open multiple files from the command line.

Leo's ancient Tangle and Untangle commands are now deprecated. This will help newbies learn Leo.

Leo now shows "Resurrected" and "Recovered" nodes. These protect data and show how data have changed. These fix several long-standing data-related problems.

A new "screenshots" plugin for creating slide shows with Leo. I used this plugin to create Leo's introductory slide shows.

A better installer.

Many bug fixes.

Links

Leo
Forum
Download
Quotes

Sunday, September 5, 2010

SQLite is serious about testing!

The How SQLite is Tested page is the most interesting discussion of software testing I have ever seen.

Saturday, August 14, 2010

Sharing code in Leo scripts, part deux

For years I have wanted Leo scripts to be able to share code directly. Now they can--simply, intuitively, dynamically, in a Leonine way.

exec(g.findTestScript(c,h)) is a big breakthrough in Leo scripting; the previous post buried the lead.

To recap, suppose a set of related @test nodes (or any other set of Leo scripts) wants to share class definitions in a node whose headline is 'x'. To get these definitions, each node just starts with::

    exec(g.findTestScript(c, 'x'))

After this one line, the script can use all the class names defined in x without qualification. Furthermore, if I change the definitions in x, these changes immediately become available to all the scripts that use them.
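
For instance, if the node 'x' defined a (hypothetical) class Point, an @test node could use it directly::

    # Body of the node whose headline is 'x':
    #     class Point:
    #         def __init__(self, x, y):
    #             self.x, self.y = x, y

    # Body of an @test node that shares it:
    exec(g.findTestScript(c, 'x'))
    pt = Point(2, 3)
    assert (pt.x, pt.y) == (2, 3)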

This one-liner is a big step forward in Leonine programming.

Friday, August 13, 2010

Adding code to scripts, the Leo way

All of Leo's unit tests reside in @test nodes in a single Leo outline. Leo's users will understand the benefits of this approach: it is easy to organize tests, and run them in custom batches. For example, I can run all failing unit tests by creating a node called 'failing tests', and then drag clones of the failing @test nodes so they are children of the 'failing tests' node. I then select that node and hit Alt-4, Leo's run-unit-tests-locally command. This executes all the unit tests in that node only.

Unit tests can often be simplified by sharing common code. Suppose, for example, that I want my unit tests to have access to this class::

    class Hello:
        def __init__(self, name='john'):
            self.name = name
            print('hello %s' % name)

Before yesterday's Aha, I would have defined the class Hello in an external file, and then imported the file. For example, a complete unit test (in an @test node) might be::

    import leo.core.leoTest as leoTest
    h = leoTest.Hello('Bob')
    assert h.name == 'Bob'

Aside: Leo's users will know that putting this code in an @test node makes it an official unit test. Leo automatically creates a subclass of unittest.TestCase from the body text of any @test node.
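
Roughly, the effect is as if Leo wrapped the node's body text in a test method, something like this sketch (schematic only, not Leo's actual code)::

    import unittest

    def make_test_case(c, g, script, name):
        # Wrap a node's body text (script) in a runnable TestCase.
        class NodeTest(unittest.TestCase):
            def runTest(self):
                exec(script, {'c': c, 'g': g, 'self': self})
        NodeTest.__name__ = name  # the headline names the test
        return NodeTest()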

Importing code this way works, but it's a static, plodding solution. To change class Hello, I have to switch to another file, make the changes and save that file, and reload the outline that uses it. I've been wanting a better solution for years. Yesterday I saw the answer: it's completely dynamic, it's totally simple and it's completely Leonine.

The idea is this. Suppose the node '@common code for x tests' contains a list of nodes, each of which defines a class or function to be shared by unit tests. A unit test can gain access to the compiled code in these nodes as follows::

    p = g.findNodeAnywhere(c, '@common code for x tests')
    script = g.getScript(c, p)
    exec(script)
    h = Hello('Bob')
    assert h.name == 'Bob'

Let's look at these lines:

1. The first line finds the node whose headline is '@common code for x tests'. As usual in a Leo script, 'c' and 'g' are predefined. 'c' is bound to the Leo outline itself, and 'g' is bound to Leo's globals module, leo.core.leoGlobals.

2. The second line converts this node and all its descendants into a script. g.getScript handles Leo's section references and @others directives correctly--I can use all of Leo's code-organization features as usual.

3-5. The third line executes the script in the context of the unit test. This defines Hello in the @test node, that is, in the unit test itself! There is no need to qualify Hello. The actual test can be::

    h = Hello('Bob')
    assert h.name == 'Bob'

That's all there is to it. Naturally, I wanted to make this scheme a bit more concise, so I created the g.findTestScript function, defined as follows::

    def findTestScript(c, h):
        p = g.findNodeAnywhere(c, h)
        return p and g.getScript(c, p)

The unit test then becomes::

    exec(g.findTestScript(c, '@common code for x tests'))
    h = Hello('Bob')
    assert h.name == 'Bob'

This shows, I think, the power of leveraging outlines with scripts. It would be hard even to think of this in emacs, vim, Eclipse, or Idle.

The difference in the new work-flow is substantial. Any changes I make in the common code instantly become available to all the unit tests that use it. I can modify shared code and run the unit tests that depend on it without any "compilation" step at all. I don't even have to save the outline that I'm working on. Everything just works.

Edward

Sunday, August 8, 2010

Leo in a nutshell

I have struggled for years to explain why Leo is interesting. Here is my latest attempt. I think it looks a bit better than usual :-)

Leo combines outlines, data, files and scripting in a unique way. As a result, it takes some time to get the Leo Aha. This page introduces Leo's features and argues that Leo truly is a unique tool.

Outlines and organization: Leo's outlines are far more flexible and powerful than any other outline you have ever used, for at least three reasons:

1. Unlike other browsers, you, not the browser, are in complete control of the outline. You can organize it however you like, and Leo will remember what you have done and will show it to you just that way when you come back next time. If you don't think this is important, you have never used Leo :-)

2. Leo outlines may look like other outlines, but in fact Leo outlines are views of a more general underlying graph structure. Nodes in Leo's outlines may appear in many places in the same outline. We call such nodes clones. Using clones, it is easy to create as many views of the data in the outline as you like. In effect, Leo becomes a supremely flexible filing cabinet: any outline node may be filed anyplace in this cabinet.

3. Leo outlines are intimately connected to both external files and Python scripting, as explained next.

External files: Any outline node (and its descendants) can be "connected" to any file on your file system. Several kinds of connections exist. The three most common kinds are:

1. @edit: Leo reads the entire external file into the @edit node's body text.

2. @auto: Leo parses the external file and creates an outline that shows the structure of the external file, just as in typical class browsers.

3. @file: Leo makes a two-way connection between the @file node (and its descendants) and the external file. You can update the external file by writing the Leo outline connected to it, or you can update the outline by changing the external file. Moreover, you can easily control how Leo writes nodes to the file, including the order in which they appear. To do all this, Leo uses comments in the external file, called sentinels, that represent the outline structure in the external file itself.
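
Schematically, an external file written by an @file node embeds the outline structure in comments, something like the sketch below (illustrative only; this is not Leo's exact sentinel syntax)::

    #@+node: spam.py          (sentinel: the @file node itself)
    '''A small module written from a Leo outline.'''
    #@+node: class Spam       (sentinel: a child node starts here)
    class Spam:
        def eggs(self):
            return 'eggs'
    #@-node: class Spam
    #@-node: spam.py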

All of these connections allow you to share external files with others in a collaborative environment. With @file, you can also share outline structure with others. Thus, a single Leo outline can contain an entire project with dozens or even hundreds of external files. Using Leo, you never have to open these files by hand; Leo does so automatically when it opens the Leo outline. Leo is a unique new kind of IDE.

Scripting: Every outline node can contain Python scripts. Moreover, each node in a Leo outline is a programmable object, which is easily available to any Leo script. Furthermore, the structure of the outline is also easily available to any script. Thus, nodes can contain programs, or data, or both!
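
For example, here is a minimal Leo script (a sketch assuming the standard script environment, where c is the outline's commander, g is leo.core.leoGlobals, and each position p exposes its headline as p.h and its body as p.b)::

    # Walk every node in the outline and report its size.
    nodes, chars = 0, 0
    for p in c.all_positions():
        nodes += 1
        chars += len(p.b)
    g.es('%s nodes, %s characters of body text' % (nodes, chars))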

Furthermore, Leo's headlines provide a natural place to indicate the type of data contained in nodes. By convention, @test in a headline denotes a unit test, @command creates a new Leo command, and @button creates a script button, that is, a Python script that can be applied to any node in an outline!

Unifying scripting, data and outline structure creates a new world. We use the term Leonine to denote the Leo-centric (outline-centric) view of programming, data and scripting. Here are some of the implications of this new world:

Data organization: Leo's clones allow unprecedented flexibility in organizing data. Leo outlines have been used as an entirely new kind of database. It is easily scriptable. As my brother has shown, it is possible to design Leo outlines so that parts of the outline are SQL queries!

Design: With Leo, you always see the big picture, and as many of the details as you like. This makes outlines ideal for representing designs. In fact, Leo outlines don't just represent designs, they are the designs. For example, all of Leo's source code resides in just a few Leo outlines. There is no need for separate design tools because creating a Leo outline simultaneously embodies both the design and the resulting code. Furthermore, Leo outlines can also represent input data to other design tools.

Programming: It's much easier to program when the design is always easily visible. Nodes provide the perfect way to organize large modules, classes and functions. Nodes also provide unlimited room to save as many details and notes as you like, without cluttering your overall view of the task, or rather tasks, at hand.

Testing: Leo is a supremely powerful unit-testing framework:

1. You can make a node a unit test simply by putting @test at the start of its headline. Leo will then automatically generate all the blah-blah-blah needed to turn the node's script into a fully-functional unit test. Oh yes, the headline becomes the name of the unit test.

2. Unit tests can use data in children of @test nodes. Typical tests put input data in one child node, and the expected results of running the test in another child node. The test simply compares the actual and expected results (a sketch appears after this list).

3. You can easily run tests in the entire outline or just in the selected subtree. Because tests reside in nodes, you can use clones to organize tests in as many ways as you like. For example, it is trivial to run only those tests that are failing.
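
As a sketch of item 2: a hypothetical @test node (the headline below is made up, not from this post) might compare a computed result against expected output stored in its children::

    # Find the @test node and read the data stored in its children.
    p = g.findNodeAnywhere(c, '@test title-case a heading')
    input_text = p.firstChild().b          # first child: the input data
    expected = p.firstChild().next().b     # second child: the expected result
    assert input_text.title() == expected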

Maintenance and support: Leo's ability to contain multiple views of data is precisely what is needed while maintaining any large project. For every new support task and every new bug, a new (plain) task node will contain all the data needed for that task: notes, test data, whatever. Furthermore, when fixing bugs, the task node can contain clones of all classes, methods or functions related to the bug. Fixing a clone in the task node fixes the node in the external file! And as always, you can use all of Leo's outlining features (including clones) to organize your task nodes.

Organization everywhere: Have you noticed that Leo's organizational prowess applies to everything? Indeed, you can use outlines and clones in new ways to organize files, projects, data, design, programming, testing, and tasks. Leo doesn't need lots of features--outlines, clones and scripts suffice. The more complex your data, designs, program and tasks, the better Leo is suited to them.

Scripting everything: Let's step back a moment. A single outline can contain databases, designs, actual computer code, unit tests, test scripts and task nodes. But Leo scripts will work on any kind of node. Thus, it is easy to run scripts on anything! Examples:

- Data: The @kind convention for headlines tells scripts what a node contains without having to parse the node's contents. The possibilities are endless (a sketch appears after this list).

- Design: scripts can verify properties of a design based on either the contents of design nodes or their outline structure.

- Coding: scripts routinely make massive changes to outlines. Scripts and unit tests can (and do!) verify arbitrarily complex properties of outlines.

- Testing: scripts can (and do!) create @test nodes themselves.

- Maintenance: scripts could gather statistics about tasks using simple @kind conventions.
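
As a sketch of the @kind idea (the @data-csv convention here is made up for illustration), a script can collect typed data nodes by looking only at headline prefixes, never parsing anything else::

    import csv, io

    # Gather every node whose headline marks it as CSV data and parse its body.
    tables = {}
    for p in c.all_positions():
        if p.h.startswith('@data-csv '):
            name = p.h[len('@data-csv '):]
            tables[name] = list(csv.reader(io.StringIO(p.b)))
    g.es('loaded %s csv tables' % len(tables))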

Tuesday, July 27, 2010

A design for inc-lint, an incremental pylint

This paper discusses the essential features of the design of an incremental pylint, or **inc-lint** for short. It discusses only those aspects that are essential for the success of the project. That is, it is the highest level design.

This design borrows some features from a previous prototype of a new pylint, which in this paper I'll call **new-lint**. New-lint used a data-driven algorithm (I called it a sudoku-like algorithm) to do lint-like checking. Many features of this data-driven algorithm will reappear below.

New-lint was an “interesting” failure. It showed that a data-driven approach to lint-like checking is feasible. Alas, its performance was comparable to that of pylint. This is strong evidence, imo, that pylint's performance cannot be significantly improved without a major change in strategy.

To get significantly better performance, an **incremental** approach must be used. Such an algorithm computes diffs between old and new versions of files and generates the minimum needed additional checks based on those diffs. My intuition is that inc-lint could be 10 to 100 times faster than pylint in many situations. Inc-lint should be fast enough so that it can be run any time a program changes.

As an extreme example of an incremental approach, inserting a comment into a program should require *no* additional analysis at all. The only work would be to notice that the ASTs (parse trees) of the changed file have not changed. More commonly, changes that do not alter the data defined by a module can have no global effects on the program. Inc-lint would check only the changed file. But these checks will happen in the presence of cached data about all other parts of the program, so we can expect such checks to be much faster than pylint's checks.
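
One concrete way to detect that case, using the standard library's ast module: parse both versions of the file and compare their dumps, which ignore comments and whitespace::

    import ast

    def isomorphic(old_source, new_source):
        # True when the two sources parse to identical trees, e.g. when
        # only comments or whitespace have changed.
        try:
            return ast.dump(ast.parse(old_source)) == ast.dump(ast.parse(new_source))
        except SyntaxError:
            return False

    assert isomorphic('x = 1', 'x = 1  # a new comment')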

Inc-lint seemed impossible


It was far from obvious that inc-lint was feasible. Indeed, the difficulties seemed overwhelming. Aside from adding or deleting comments, any change to a Python file can have ripple effects throughout an entire program. What kind of bookkeeping could possibly keep track of all such changes? For example, diffs based on ASTs could not possibly work: the number of cases to consider would be too large. Building incremental features into pylint also seemed hopeless. The present pylint algorithms are extremely complex—adding more complexity into pylint would be a recipe for failure. In spite of these difficulties, a new design gradually emerged.

Global and Local analysis


The essential first step was to accept the fact that some checks must be repeated every time a file changes. These checks include checks that depend on the exact order of statements in a file. For example, the check that a variable is used before being defined is such a check. The check that a 'break' statement appears in an appropriate context is another such check. Otoh, many other checks, including *all* data-flow checks, do *not* depend on the order in which definitions appear in files.

The distinction between order-dependent and order-independent checks is the key organizing principle of the design. This led almost immediately to the fundamental distinction of the design: local analysis and global analysis.

**Local analysis** depends on the order of statements in a Python file. Inc-lint completely redoes local analysis for a file any time that file changes. Local analysis performs all checks that depend on the exact form of the parse (ast) trees. As we shall see, the output of local analysis is data that does *not* depend on the order of statements in the parse tree.

**Global analysis** uses the order-independent data produced by local analysis. Global analysis uses a data-driven algorithm: only the *existence* of the data matters, how the data is defined is irrelevant.

This distinction makes an incremental design possible. We don't recompute global checks based on diffs to parse trees. That would be an impossible task. Instead, we recompute global checks based on diffs of order-independent data. This is already an important optimization: program changes that leave order-independent data unchanged will not generate new lint checks.

Contexts and symbol tables


A **context** is a module, class or function. The **contents** of a context are all the (top-level) variables, classes and functions of that context. For example, the contents of a module context are all the top-level variables, classes and functions of that module. The top-level classes and functions of a module are also contexts: contexts may contain **sub-contexts**.

**Symbol tables** are the internal representation of a context. Contexts may contain sub-contexts, so symbol tables can contain **inner symbol tables**. In other words, symbol tables are recursive structures. The exact form of symbol tables does not matter except for one essential requirement—it must be possible to compare two symbol tables easily and to compute their diffs: the list of symbols (including inner contexts) that appear in one symbol table but not the other.

Local analysis produces the **module symbol table** for that file. The module symbol table and all its inner tables describe every symbol defined anywhere in the module. Local analysis is run (non-incrementally) every time a file changes, so the module symbol table is recreated “from scratch” every time a file changes.
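
A minimal sketch of the kind of structure this implies (my illustration, not code from the design): each context's table maps names to entries, class and function entries carry inner tables, and a diff recurses over shared sub-contexts::

    class SymbolTable:
        # One table per context (module, class, or function).
        def __init__(self, name):
            self.name = name
            self.entries = {}  # symbol name -> SymbolTable (sub-context) or None (plain name)

        def define(self, name, inner=None):
            self.entries[name] = inner

        def diff(self, other):
            # Return (created, deleted) symbol names, recursing into shared sub-contexts.
            created = sorted(set(other.entries) - set(self.entries))
            deleted = sorted(set(self.entries) - set(other.entries))
            for name in set(self.entries) & set(other.entries):
                old, new = self.entries[name], other.entries[name]
                if old is not None and new is not None:
                    sub_created, sub_deleted = old.diff(new)
                    created += ['%s.%s' % (name, s) for s in sub_created]
                    deleted += ['%s.%s' % (name, s) for s in sub_deleted]
            return created, deleted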

Deductions and the data-driven algorithm


The second output of local analysis is a list of **deductions**, the data that drive the data-driven algorithm done by global analysis. Deductions arise from assignment statements and other statements. You can think of deductions as being the data-flow representation of such statements.

Important: deductions *use* the data in symbol tables, and deductions also *set* the data in symbol tables. The data-driven algorithm is inherently an iterative process.

For example, an assignment a = b implies that the set of types that symbol 'a' can have is a superset of the set of types that symbol 'b' can have. One kind of deduction “completes” the type in the symbol table for 'a' when all types in the right-hand side (RHS) of any assignment to 'a' have been deduced. This deduction **fires** only when all the right-hand sides of assignments to 'a' are known. Naturally, 'a' itself may appear in the RHS of an assignment to another variable 'c'. Once the possible types of 'a' are known, it may be possible to deduce the type of 'c'.

Another kind of deduction checks that operands have compatible types. For example, the expression 'x' + 'y' is valid only if some '+' operator may be applied to 'x' and 'y'. This is a non-trivial check: the meaning of '+' may depend on an __add__ function, which in turn depends on the types of 'x' and 'y'. In any case, these kinds of deductions result in various kinds of lint checks.

Global analysis attempts to satisfy deductions using the information in symbol tables. As in new-lint, the data-driven algorithm will start by triggering **base deductions**, deductions that depend on no other deductions. Satisfied deductions may trigger other deductions. When all possible deductions have been made, the remaining unsatisfied deductions generate error messages.
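
A toy sketch of such a data-driven loop (my illustration, not code from the design): a deduction fires when the types of its prerequisites are known, firing may enable further deductions, and whatever cannot be satisfied at the end becomes an error report::

    def run_deductions(deductions, known_types):
        # deductions: list of (target, prerequisites, rule) triples; rule maps
        # the prerequisites' type sets to the target's type set.
        pending = list(deductions)
        progress = True
        while progress:
            progress = False
            for d in list(pending):
                target, prereqs, rule = d
                if all(name in known_types for name in prereqs):
                    known_types[target] = rule(*[known_types[name] for name in prereqs])
                    pending.remove(d)
                    progress = True
        return pending  # unsatisfied deductions become error messages

    # For 'a = b' with 'b' known to be an int, the loop deduces the types of 'a':
    remaining = run_deductions([('a', ['b'], lambda ts: set(ts))], {'b': {'int'}})
    assert not remaining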

Diffs


Large programs will contain many thousands of deductions. We cannot afford to rerun all those deductions every time a change is made to a program. Instead, we must compute the (smallest) set of deductions that must be re-verified.

To compute the new deductions, we need a way of comparing the data contained in the changed source files. Comparing (diffing) parse trees will not work. Instead, inc-lint will compare symbol tables.

Happily, comparing symbol tables is easy. Any two source files that define the same contexts will have equivalent (isomorphic) symbol tables, regardless of how those contexts were defined. The diff algorithm will be recursive, mirroring the recursive structure of symbol tables. We expect the diff algorithm to be simple and fast.

The output of the diff will be a list of created and destroyed symbols for any context. Changing a name (in any particular context) is equivalent to deleting the old name and creating a new name.

Caching and updating


Inc-lint will cache symbol tables and deductions for all files. This allows us to avoid the local analysis phase for all unchanged files. However, changes made in the local analysis of a file may affect deductions in many *unchanged* files.

The update phase requires that we be able to find the “users” (referrers) of all data that might change during local analysis. Thus, we expect symbol tables and deductions to use doubly (or even multiply) linked lists. It should be straightforward (and fast!) to update these links during the update phase.

Preserving pointers


Diffing symbol tables will result in a list of changes. When applying those changes, we want to update the *old* (cached) copy of each symbol table. This will allow references to unchanged items in the symbol table to remain valid. Of course, references to *changed* items will have to be deleted to avoid “dangling pointers”. By taking care to update links we can use typical Python references (pointers) to symbol table entries and deductions. This avoids having to relink pointers to new symbol tables.

Recap


Here are the essential features of the design:

1. Inc-lint performs local analysis for all changed files in a project. This phase does all lint checks that depend on the order of statements in a Python program. The output of local analysis is a new symbol table for each changed file, and a list of deductions that must be proved for the changed file.

2. A diff phase compares the old (cached) and new versions of the symbol table. This diff will be straightforward and fast because symbol tables will be designed to be easily diffed. As an optimization, we can bypass the diff if the old and new parse trees are “isomorphic”. For example, files that differ only in whitespace or comments will have isomorphic ast trees.

3. An update phase inserts and deletes cached (global) deductions. Changes to a symbol table may result in changes to deductions in arbitrarily many files of the project. Thus, all symbol table entries and deductions will be heavily linked.

4. After all symbol table entries and deductions have been updated, a data-driven algorithm will attempt to satisfy all unsatisfied deductions, that is, deductions that must be proven (again) because of changes to one or more symbol tables. These deductions correspond to the type-checking methods in pylint. At the end of this phase, still-unsatisfied deductions will result in error messages.

Conclusions


This design looks like the simplest thing that could possibly work. Indeed, it looks like the *only* reasonable design. For simplicity's sake, local analysis *must* be done afresh for all changed files. In contrast, global analysis depends only on deductions and symbol tables, neither of which depends on program order. Thus, we can easily imagine that deductions that depend on unchanged symbol table entries (symbols) will not need to be rechecked.

This design consists of largely independent parts or phases. Difficulties with one part will not cause difficulties or complexity elsewhere. This separation into independent phases and parts is the primary strength of the design. Like Leo's core modules, this design should remain valid even if various parts change significantly.

This design seeks to minimize the risks to the project. I believe it has accomplished this goal. It should be possible to demonstrate the design with relatively simple prototype code.

All comments are welcome.

Edward K. Ream
July 27, 2010