Page Created: 7/30/2014   Last Modified: 8/7/2023   Last Generated: 6/7/2024

Program Description

(This page is an extremely rough draft and is full of all kinds of errors. I will try to improve the documentation over time if I release future versions. Please note that this page was originally generated in HTML. If you are reading this as a text README file inside the source tarball, it will not contain the example hyperlinks.)

ScratchedInSpace is a Perl-based static site generator with Web 2.0 features such as wiki-style CamelCase linking and NoSQL-style metadata, including tags, hashtags, backlinks, key-value search, recursive transclusion, recursive macros, breadcrumbs, and external link indicators. It integrates Textile markup and can work with the ScratchedInTime commenting system to provide wiki-like editing.


perl - Reads text files (containing markup) from the /staticsite folder, generates HTML and CSS files, and saves them to the /staticstaging folder.

perl publish - Generates the site as above, but assigns the public web server path to the files in the /staticstaging folder, then uploads those files to the public web server (via script).

perl status - Generates only a single site status page and assigns the public web server path to the files in the /staticstaging folder, but does not upload them. It only copies the generated site status page, along with a unique status.css file, from the /staticstaging folder over to the /sitestatus folder. This provides a convenient location from which to upload those 2 files to an off-site host, so that the status page remains reachable when the public web server goes down.

Because of the various interrelations between pages, the generator has to build the entire site at one time.

Required Files and Directories (these need to be in the same folder as the generator script):

(configuration file) - Contains paths and other parameters.
style.css - CSS style sheet.
status.css - CSS style sheet for an off-site site status page.
template.tmpl - HTML page structure.
status.tmpl - HTML page structure for an off-site site status page.
(captchas file) - Knowledge captcha questions and answers (used by ScratchedInTime only).
(plugin file) - Plugin file that includes extra macros.
(upload script) - Bash script which uploads the generated HTML pages and the Perl scripts from the remote generating PC to a public web server.
(edit script) - Bash script that enables web browsers to auto-launch a text editor on the local PC when the "Edit page" link is clicked in staging mode.
/static - Folder that houses the Perl script and ScratchedInTime files (if applicable).
/staticsite - Folder that houses the text, CSS, and template files that create the site.
/staticstaging - Folder that houses the staging site and any subfolders (images, files, etc.).
/sitestatus - Folder that houses a staging index.html page and status.css file intended for an off-site host (such as a 3rd party) in case the primary site goes down.
TagSearch - A page containing the Tagpage plugin; it must be installed in the /staticsite folder to allow hashtag search. This file can be renamed as long as it is also renamed in the configuration file.

Perl dependencies

  • Cache::Memcached::Fast
  • Text::Textile

Linux dependencies (used by the upload and/or edit scripts)


The generator is designed to be run from a remote PC in conjunction with the dynamic comment server, ScratchedInTime, but this is not required. If $memcachedserver is populated, it also uploads captchas to Memcached for use by ScratchedInTime.

The remote PC doesn't have to be that powerful, but more power speeds up generation. That is the beauty of static generators: you can leave the generation to a more powerful computer, while the pages themselves are served from a less powerful one.

The pages from which it generates are just text files, named in CamelCase. In addition to the Textile lightweight markup language, it also interprets markup for CamelCase, hashtags, macros, and metadata.

Essentially, it is both a personal Wiki and a database in static form. The main things this generator can't do that a dynamic wiki/database can are perform a unique keyword search and render edits in real time. But if you know the search term ahead of time, the static generator can simply pre-compute the search, transforming the problem from the time domain to the space domain, where static web servers excel. The hashtags work on this principle.

Subfolders can be created within the /staticstaging folder, such as "images" and "files". Except for these subfolders, the entire site is flat.

The /staticsite files are typed up in a text editor such as bluefish (or any text editor of choice). No spellcheck is performed by the generator; any spellchecking must be done in the text editor. Each file must be named in CamelCase.
Links to images should go to the /staticstaging/images folder.

It is ideal to create functions and aliases such as "edit", "stage", and "publish" in .bashrc (Void Linux) as follows:

function edit() {
    bluefish /staticsite/"$1" &
}
alias publish='cd /static; perl publish'
alias stage='cd /static; perl; cd /staticstaging; python2 -m SimpleHTTPServer 80'

  • To edit a file, type "edit" followed by the page name.
  • To stage the site, type "stage" and it will launch a web server on the PC.
  • To publish the site, type "publish", and it will upload it to a public web server.

Being static, this system was designed to operate in 4 stages: Edit, Stage, View, Publish.

Wiki-style Edit

Instead of editing pages by launching a text editor manually, as mentioned above, when in staging mode, "Edit" and "Edit (remote)" links appear on each page. If the Edit link is clicked (and if the ScratchedInTime system is used), it will send a parameter containing the name of the page to the dynamic commenting server, and that server will send back a custom MIME content-type page which includes the name of the page being edited. This prompts the browser to choose an application to open the file (which can be permanently set in the browser). In some versions/distributions of Linux, however, a .desktop file and new MIME type may need to be manually created. If the edit script is chosen as the application, it will open the page in the bluefish text editor (or any editor, if configured).

Alternatively, if you are on a remote PC that can access the generator PC over ssh, and if both PCs are running X-Windows, if you click the "Edit (remote)" link, it will launch that same text editor over secure ssh X-Forwarding. To simplify the command string, this feature requires that the ssh config file has already been set up with HostName, Port, IdentityFile, and that X-Forwarding has been enabled on the generator PC and key authentication is used.

This provides quick local and remote editing of pages as one reads them, similar to a dynamic wiki, yet it uses very little processing power and is more secure than enabling local links (file://) in the browser. No generator markup is ever exposed over the network. The only thing going over the network is the name of the file to open; the local Linux OS and local text editor do the rest. If the dynamic server were compromised and an attacker changed its response, it would simply send the wrong file name, causing bluefish to open a different page or a blank page (which would be immediately noticed by the user). ScratchedInTime will not return page names that are longer than $maxcommentlength minus 8, to account for the length of the "XForward" flag for remote edits. The default is set to 48 characters, which allows a maximum page name length of 40. It also strips out non-alphanumerics. For extra security, the edit script does its own length and alphanumeric validation and must be configured to match.

The rendering is not real-time and must be manually initiated. In publish mode, the Edit links disappear, as there should be no reason to edit the published pages, only the staging pages running on a web server on a local PC. When the staging site looks good, it can be published.

The edit links are created on the template using the $editlink and $editlinkremote variables.

Template expansion

The HTML page rendering engine is used recursively, and it treats template files, such as template.tmpl, the same as normal pages, with a few differences. It does not render Textile within a template.

It renders HTML in this order:

  • Template Strings - It starts rendering the template, expanding strings within Macro code.
  • Template Macros - It then sends those string values to functions within the Macro code.
  • Template Renders - If the template Macro code contains a Render function (see below), it will render the entire page in place of that macro and enter it into the body of the template.
  • Page Macros - If the page being rendered also contains macros, those macros will also be rendered and inserted into the page.
  • And so on...

This is an extremely powerful recursive process which allows a tiny bit of render code to generate the entire site and allows for powerful macros.

However, it means care is needed when inserting macros into the template, since the non-render macros in the template will expand at the same time as the rendered page and its sub-macros.

Meta generator tags

By default, the "meta generator" tag within the HTML on the pages is set to "ScratchedInSpace". These tags can be removed if needed by editing the template.tmpl template file before generating the site.

Markup and macros

The generator has its own markup, which tries not to interfere with Textile, and which includes CamelCase, metadata, and macros.

HTML can be directly added to the page. Textile markup can be used.

Macros are codes that can be typed on a page or template (.tmpl) file that expand to HTML, an extremely powerful function, especially when they are used with recursive search and metadata, which are the building blocks of a document-oriented database.

How the macro is called from the static page determines whether its output is expanded before or after HTML, static site, and Textile rendering. During page rendering, each macro is "tokenized" at a certain position on the page, preventing further detection by other rendering layers until it is time to detokenize at that position.

The markup rendering order is:

  • Metadata is stripped out completely by the rendering process and is not considered data to be rendered. Metadata begins with a period (.).
  • An HTML macro is tokenized first and detokenized last, rendered as pure HTML by the browser, skipping all other rendering. HTML macros output pure HTML that I don't want the static web generator or Textile to interfere with.
  • A non-HTML macro is tokenized second and detokenized immediately. If the need arises in the future, something can be inserted in between this stage. External link indicators and hashtag and CamelCase escaping are done at this stage (since the exclamation mark interferes with Textile).
  • Textile markup is converted at the third stage. Since non-HTML macros have already detokenized, they are fair game for Textile. This works out well, as it allows the output from non-HTML macros to use Textile format if needed.
  • Hashtags, CamelCase, and broken CamelCase are converted at the fourth stage using my regex code. Since Textile has already rendered, this stage has to be careful not to interfere. Ideally I would have made this the third stage, but Textile kept interfering with my own markup, so this is a temporary solution. To prevent CamelCase from being auto-linked, put an exclamation mark in front of the word, such as !ThisExample.
  • Normal HTML is the fifth and final stage, performed by the web browser. The output of an HTML macro is actual HTML.
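The tokenize/detokenize scheme above can be sketched roughly as follows (an illustrative Python sketch, not the actual Perl code; the token format shown is an assumption):

```python
import re

store = {}

def tokenize(text, pattern):
    """Replace each macro with an opaque token so later rendering
    layers (e.g. Textile) cannot touch it."""
    def stash(match):
        token = "\x00TOKEN%d\x00" % len(store)
        store[token] = match.group(1)  # remember the macro body
        return token
    return re.sub(pattern, stash, text)

def detokenize(text):
    """Swap each opaque token back for its stored expansion."""
    for token, body in store.items():
        text = text.replace(token, body)
    return text

page = "Intro <---&search;---> outro"
page = tokenize(page, r"<---(.*?)--->")  # hide the macro from Textile
# ... Textile rendering would run here, seeing only the opaque token ...
page = detokenize(page)                  # restore (here: the raw body)
```

In the real generator, the stored body is expanded by Perl before being swapped back in; the sketch only shows the hiding mechanism.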

Non-HTML macros must be entered on the page as:

<---&macroname;--->

For HTML macros, replace the --- hyphens with ~~~ tilde signs:

<~~~&macroname;~~~>


To keep Textile from interfering with HTML macros, they must be surrounded with double equal signs:

==<~~~&macroname;~~~>==
The macroname is really just a Perl command and can be anything that Perl will execute. My macro code, however, is generally limited to Perl functions, though strings and if statements can be very helpful, especially in the template file. Strings can automatically send information into the macro, such as the current page name.

A typical macroname would be something like this:

&comment("$currentpage");

...which is a function called comment that accepts the variable $currentpage as input. The $currentpage variable expands when the page is rendered.

This could also be written:

$currentpage

...which would simply output the name of the current page.

In Perl, function calls begin with & and scalar variables with $, which is what is going on here.

If they use tildes, they output pure HTML and are not affected by other markup, preventing expansion of CamelCase.

There are several built-in macros that form the core of a document-oriented style wiki, and additional external macros called Plugins can be created for extra functionality.

Macro nesting

Because non-HTML macros tokenize before the HTML macros do, you can nest an HTML macro inside a non-HTML macro.

Search Macros

There are several macros that rely on the generator's search engine, since they are all just various forms of search. They are Page Search, Pagename Search, Tag Search, Key-Value Search, and Backlinks.

Page Search

<---&search ("pagesearch", "keyword");--->

This is a built-in macro which performs a search for pages that contain a keyword in their text. It outputs a comma separated list of CamelCase page names. It is a non-HTML macro, so its CamelCase names are automatically hyperlinked. This is the "accidental linking" power of CamelCase and why it is required for all page names.

For example, here are the pages on this site that contain the word Raspberry: OswaldBot, InformationInFlight, ScratchedInSpace, OswaldCluster, TinyRoomTinyWorld, DeadPiAudio, PeaLanguage, RobotDesigns, RsTwoThreeTwo, PacketRadio, LeeDjavaherian, VendingMachine, TrillSat, CommentSystem, MorseDecoded, TheThirdDimension, AboutThisSite, ThePhoneSystem, OswaldLaser, ScratchedInTime, IdThreePlugin, ChronicleOfBlogs

The static generator searched all pages for that word and displayed the results in place of the macro.
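The CamelCase auto-linking can be approximated with a short regex (a simplified Python sketch of the idea; the real Perl regex handles many more edge cases, and the generated link paths are assumptions):

```python
import re

# Two or more capitalized runs, optionally escaped with a leading '!'.
CAMEL = re.compile(r"(!?)\b([A-Z][a-z0-9]+(?:[A-Z][a-z0-9]+)+)\b")

def linkify(text):
    """Turn CamelCase words into hyperlinks unless escaped with '!'."""
    def repl(m):
        if m.group(1) == "!":  # escaped: drop the '!' and leave plain text
            return m.group(2)
        name = m.group(2)
        return '<a href="%s.html">%s</a>' % (name, name)
    return CAMEL.sub(repl, text)
```

A single capitalized word like "Raspberry" is not matched, which is what makes CamelCase a safe convention for page names.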

Pagename Search

<---&search ("pagenamesearch", ".*keyword.*");--->

This is a built-in macro which performs a search for pages whose page name matches a keyword (regular expressions are allowed). It outputs a comma separated list of CamelCase page names. It is a non-HTML macro, so its CamelCase names are automatically hyperlinked.

If the keyword is set to match everything ".*", it will return a list of all pages on the site, like a site index.

Results of search for pagename Oswald
OswaldBot, OswaldCluster, OswaldLaser


Backlinks

<---&search("backlinks", "", "","$currentpage")--->

This displays pages that link to the current page. It is typically used in the footer section of the template.tmpl file so that backlinks appear at the bottom of each page. It doesn't look for HTML hyperlinks, but for known page names. This works since each page name is CamelCase and is automatically considered a link. If it is used within a template, the $callingpage variable must be used instead of $currentpage.


Metadata

If the first character on a line in a page is a period, this tells the generator that the line contains metadata. It treats the word directly after the period as the "key", followed by a space and then the "value". Metadata can be anywhere on a page, but it is ideal to keep it at the top where it is immediately visible. It is important to never start a line with a period unless it is metadata.

For example, if ".category downloads" was written at the far left of this page, this would create a key called "category" and associate it with a value called "downloads". The key and key-value association is a simple construct, but creates the basis for any database. All databases are relational, but not all are "relational databases", a term that applies to Codd and SQL-style databases. Data only becomes "information" if a human being applies some sort of association, taking it out of the randomness of nature and into human understanding. Placing key-value associations on pages creates a document-oriented database, a form of the NoSQL database, one of the most powerful information constructions known to man. The WWW arose out of simple associations (hyperlinks), and the Semantic Web may form out of simple key-value pairs. (The Semantic Web may have already formed, but that is another topic.)
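The ".key value" convention can be parsed in a few lines (an illustrative Python sketch of the rules described above, not the generator's actual code):

```python
def parse_metadata(page_text):
    """Collect '.key value' lines; everything else is body text."""
    meta, body = {}, []
    for line in page_text.splitlines():
        if line.startswith("."):
            key, _, value = line[1:].partition(" ")
            if key == "tags":
                # .tags may hold several comma-separated values
                meta[key] = value.split(", ")
            else:
                meta[key] = value
        else:
            body.append(line)
    return meta, "\n".join(body)
```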

Reserved Keys

There are two keys that have a special use in the system, called "metadescription" and "headerextra". Using these names for ordinary keys is not allowed and will cause problems. Any value assigned to the metadescription key will become the content of the HTML meta description tag. Any value assigned to the headerextra key will be added to the HTML page header section, in case you need to insert some HTML there. This allows information to be written to the header of that page; otherwise, there is no easy way to do this without modifying the template file (which affects all pages and is not ideal for this purpose). These values are expanded when the template is rendered.

Metadata Searches

Searching this metadata is performed by the same search engine, slightly modified to work with this metadata. The Key-Value Search and Tags macros rely on this engine.


Key-Value Search

<---&search("keyvaluesearch", "key", "value");--->

This performs a search for pages that contain a certain key-value pair.

This is useful to create categories. For example, here are the pages that have key of "category" and value of "downloads": ScratchedInSpace
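Over the collected metadata, a key-value search is then just a filter (an illustrative Python sketch; the page data shown is made up):

```python
def keyvaluesearch(pages, key, value):
    """Return names of pages whose metadata has key == value."""
    return sorted(name for name, meta in pages.items()
                  if meta.get(key) == value)

# Hypothetical pre-parsed metadata for two pages:
pages = {
    "ScratchedInSpace": {"category": "downloads"},
    "OswaldBot":        {"category": "robots"},
}
```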

Tags and Hashtags

<---&search ("keyvaluesearch", "tags", "tagname");--->

Tags are a special kind of key-value search where the key is fixed as "tags" and the value is a particular tag. Tags are an important Web 2.0 technology, allowing wonderful "folksonomies" to form. One of the design goals of this tagging system was to avoid the use of a dynamic server and avoid dynamic code like Javascript, and it turns out that simple HTML anchors work well for accessing information. Time is simply converted to Space. Clicking through the links and anchors can even allow one to visualize patterns more quickly than a slow or overtaxed dynamic server or sluggish Javascript on a slow PC.

The value of any .tags key in the metadata can consist of multiple tag names separated by a comma and a space, such as ".tags apple, red, fruit, juicy", unlike normal key-value metadata, which may only include one value.

Hashtags are treated the same as tags, except that they may occur in the body text and must be preceded by a number sign or hash symbol #. They include an anchor, which is used by the Tagpage (tag search) plugin. If they are preceded by two ##, they become a non-anchored hashtag, which is useful if you do not want tag searches hitting that anchor (such as example hashtags or the lists provided by the Taglist macro below). Hashtags must be preceded by a space and can only contain alphanumeric symbols. By default the TagSearch page includes the Tagpage plugin and should be added to the /staticsite folder on installation. Hashtags on any page automatically become hyperlinks that link to an anchor on TagSearch. Hashtags override CamelCase names and will take you to the TagSearch page; but once on the TagSearch page, the Tagpage plugin will auto-link tags to their CamelCase pages if those pages exist, adding another level of visualization.

To prevent the accidental linking of the hash or number sign symbol # in normal body text, a hashtag may be escaped by preceding it with the exclamation mark symbol. For example, this #word has been escaped and is not showing up as a tag at all. The exclamation mark actually interferes with the Textile markup, so internally it is converted to a + symbol to avoid conflict.

Results of search for tag "RaspberryPi":

OswaldBot, ScratchedInSpace, OswaldCluster, TinyRoomTinyWorld, DeadPiAudio, PeaLanguage, RobotDesigns, RsTwoThreeTwo, PacketRadio, LeeDjavaherian, VendingMachine, TrillSat, CommentSystem, TheThirdDimension, OswaldLaser, ScratchedInTime
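The hashtag rules above (anchored #, non-anchored ##, escaped !#) can be sketched like this (an illustrative Python sketch; the real markup handling, including the internal + substitution, is more involved):

```python
import re

# '#tag' = anchored hashtag, '##tag' = non-anchored, '!#tag' = escaped.
HASHTAG = re.compile(r"(?:^|(?<=\s))([!#]?)#([A-Za-z0-9]+)")

def link_hashtags(text, tagpage="TagSearch"):
    def repl(m):
        prefix, tag = m.groups()
        link = '<a href="%s.html#%s">#%s</a>' % (tagpage, tag, tag)
        if prefix == "!":                        # escaped: plain text
            return "#" + tag
        if prefix == "#":                        # '##': link, but no anchor here
            return link
        return '<a name="%s"></a>' % tag + link  # '#': drop an anchor, then link
    return HASHTAG.sub(repl, text)
```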



Taglist Macro

While the Tagpage plugin returns the page names that contain a tag, the Taglist macro returns a comma separated list of the tags and anchored hashtags on a page. It then outputs the entire list as non-anchored hashtags.

For example, the tags and anchored hashtags on this page are: RaspberryPi, Perl, hashtags, NoSQL. A tag search will not find these tags, since they are non-anchored, but these tags can be used to link to the tag search page defined by $tagpage.

Field Macro

<---&field ("pagename", "key");--->

This is a built-in macro which returns the value of a key-value field, for a particular page and key. It is a non-HTML macro, so its CamelCase names are automatically hyperlinked. It is also interesting in that if another macro is added as the field, depending on where this macro is expanded, it could lead to recursion.

Note that certain variables can be inserted in place of quoted names. For example, substituting $currentpage in place of the pagename will fetch the name of this page during the render process. However, since a template renders first and calls the render macro for the actual page, $callingpage must be used instead. So the correct use of variables depends on where in the rendering process the macro occurs.

Here is a field lookup for the parent of this page, a key-value pair where the key is "parent" and the value is the name of the parent page, an example of metadata used by the Breadcrumb macro: OswaldCluster


Breadcrumb Macro

A ↑ icon appears at the top of the page that links back to the parent page, which is listed in the .parent field. This is an up arrow icon that corresponds to Unicode U+2191. The .parent field must contain a valid page name to prevent broken links in the arrow icon. This allows a hierarchical arrangement of pages for guided navigation. If the parent page is set to the same name as the page itself, the arrow icon will disappear, which is useful for pages like a root or index page which has no parent.
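The breadcrumb rule can be sketched in a few lines (an illustrative Python sketch; the link path is an assumption):

```python
def breadcrumb(page, parent):
    """Link the up arrow (U+2191) to the parent page; a page that is
    its own parent (e.g. the root) gets no arrow."""
    if parent == page:
        return ""
    return '<a href="%s.html">\u2191</a>' % parent
```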

External link indicator

If an external link is written using Textile markup, a ↗ appears to the right of the link. This is a northeast arrow icon (diagonal) which corresponds to Unicode U+2197.

Comments macro

The comments macro is a built-in HTML macro used as follows:

<~~~&comment("$currentpage");~~~>

The parameter is actually the comments page it is linking to, so in most cases this is $currentpage.

It will create a link to a comment page, if combined with ScratchedInTime.


Render Macro

<~~~&render("pagename", "$currentpage");~~~>

The second parameter is the calling page, which in most cases is the $currentpage.

One of the most under-appreciated technologies of Web 2.0 is transclusion, which didn't catch on when the WWW first formed. It is immensely important for a document-oriented database wiki.

It simply displays the page within another page, allowing powerful recursion. Care must be taken to avoid runaway recursion, since there is no depth limit.
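The recursion hazard can be illustrated with a toy render loop (a Python sketch; the "render:" directive syntax is invented for illustration, and unlike the real generator this sketch adds a depth cap to show where a runaway A-includes-B-includes-A loop would otherwise spin forever):

```python
def render(pages, name, depth=0, max_depth=10):
    """Expand 'render:Other' directives recursively, with a depth cap."""
    if depth > max_depth:
        return "[recursion limit reached]"
    out = []
    for word in pages[name].split():
        if word.startswith("render:"):
            out.append(render(pages, word[len("render:"):], depth + 1, max_depth))
        else:
            out.append(word)
    return " ".join(out)

# Two pages that transclude each other: a runaway loop without the cap.
pages = {"A": "a-text render:B", "B": "b-text render:A"}
```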


Plugins

Plugins are complex macros that can be created and stored in the plugin file as Perl functions, and they also have access to the various rendering variables. The important thing is that the functions must return the string that is substituted for the macro.

Plugins allow the comma separated output from some functions to be used as input for others, providing amazing power like that found in document-oriented databases.

Below is an example of a custom macro called Programlanguage. It performs a key-value search of key "category" value "program", and then for each page returned, it shows the value of the "language" field of that page. In other words, it just lists the various languages used for all the programs listed on this site.


Here is the output:

Python, Bash, Perl 5, Perl 5, Perl

Other plugins include Tagpage, Yearsago, and Bloglink.


Tagpage

Tagpage generates an alphabetized list of the tags on the site (with invisible anchors), along with the pages that contain each tag, linking to those pages and tag anchors to allow quick jumping to the section containing the hashtag. This creates a static page for tag searches that would normally require a dynamic server. If this plugin is added to the page defined in the $tagpage variable, it will turn on hashtag highlighting to link to a list of pages that also include that tag.


Yearsago

Yearsago returns the approximate number of years since the origin year. It can be added to page text to eliminate the need to update relative year values as the years go by, and it updates any time the site is regenerated. For example, since we know that the origin year of the first moon landing is 1969, we can add the plugin here: It has been approximately 55 years since the first man walked on the moon. It is not month-accurate and only looks at year boundaries. Note that this value will not necessarily be accurate relative to $pagemoddate, since time may have passed since the page was last modified, but it will be accurate for the last generated date ($lastgenerated), since the year value only updates during a page generation.
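The year-boundary behavior amounts to a single subtraction (an illustrative Python sketch of the described behavior, not the Perl plugin):

```python
from datetime import date

def yearsago(origin_year):
    """Approximate years elapsed since origin_year; only year
    boundaries are considered, so it is not month-accurate."""
    return date.today().year - int(origin_year)
```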

Bloglink

<~~~&bloglink ("link name", "blog date/time");~~~>

Bloglink is an HTML macro that returns a hyperlink to a particular blog entry on the ScratchedInTime server. The link name is the text that will be displayed for the hyperlink, and the blog date/time should be in the typical format used on the blog, for example "Fri Aug 1 05:18:53 2014", which can simply be copied and pasted from actual blog entries (spacing is critical). The plugin will extract the year value, link to the appropriate page (MyBlog for the current year or MyBlog[year] for archived years), and create the anchor to the particular entry, converting the spaces and colons to underscores to match the anchor format used.

When the current MyBlog page is moved into an archive in a following year, the hyperlink will automatically update to its new location.
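The page selection and anchor conversion described above can be sketched as follows (an illustrative Python sketch; the exact anchor and page-name formats are assumptions based on the description):

```python
def blog_anchor(timestamp):
    """Spaces and colons in the blog timestamp become underscores."""
    return timestamp.replace(" ", "_").replace(":", "_")

def bloglink(name, timestamp, current_year):
    """Link to MyBlog for the current year, MyBlog<year> for archives."""
    year = timestamp.rsplit(" ", 1)[-1]  # last field is the year
    page = "MyBlog" if int(year) == current_year else "MyBlog" + year
    return '<a href="%s.html#%s">%s</a>' % (page, blog_anchor(timestamp), name)
```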


Variables

The following variables can be used within macros, either standalone or sent to a function:

  • Global
    • $sitedir
    • $scriptdir
    • $stagingdir
    • $serverdir
    • $memcachedserver
    • $memcachedport
    • $privateserver
    • $publicserver
    • $commentbasepath
    • $commentbaseurl
    • $templatepage
    • $tagpage
    • $captchaspage
    • $csspage
    • $spamringsize
    • $lastgenerated
  • During render
    • $currentpage
    • $callingpage
    • $pagemoddate
    • $editlink
    • $editlinkremote

This is a Bash script which uploads the generated HTML pages and the Perl scripts from /staticsite (on the remote generating PC) to a public web server. Bash was ideal for this task.

Normally, when "perl" is run from the /static directory, it generates pages that are meant to be viewed from the /staticstaging folder.

But if "perl publish" is run, it will generate pages that are meant to be viewed from the public server, and the upload script will then run automatically to upload the pages. It removes any EXIF tags from the images folder in /staticstaging and only uploads changes, since it uses rsync. It also sets up the correct file permissions for the public server. It will upload any subfolders created in the /staticstaging folder, and it will delete any files and folders on the public server that were not in the /staticstaging folder.

Normally this script does not need to be run by itself. It was separated from the generator since there are times when you don't want to regenerate the whole site, but just make a change to the /staticstaging folder (like updating an image, template, or CSS file). Also, it does a lot of things that don't fit neatly in Perl and that the underlying Linux OS does better, like running rsync, ssh, and exiftool. The Perl scripts have been kept as free of extra modules as possible, either using custom code or relying on the wonderful Linux subsystem.

Running the upload script manually should only be done if the last command run was "perl publish" and not a plain "perl" generation; otherwise it will upload the "/staticstaging" version of the site, which was not meant for the public servers, and its hard-coded paths will break.

Captchas data file

The captchas data file contains a list of questions and answers for the knowledge captcha used by ScratchedInTime. The format is:

question:answer
question:answer
... and so on.

Each line needs to start at the left, and there just needs to be a colon (:) between the question and the answer.
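Reading that format back amounts to splitting each line on its first colon (an illustrative Python sketch, not ScratchedInTime's actual loader):

```python
def load_captchas(text):
    """Parse 'question:answer' lines into pairs; the first colon
    separates question from answer, and lines without one are skipped."""
    pairs = []
    for line in text.splitlines():
        if ":" in line:
            question, _, answer = line.partition(":")
            pairs.append((question, answer))
    return pairs
```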

Site Status page

If "perl status" is run, it will generate a unique "site status" page, renamed as an index.html file, that is meant to be hosted on an off-site public server, along with its associated status.css file. This is in case the site goes down and status updates need to be provided. It uses a different template, status.tmpl, along with status.css, so that the page can be sufficiently customized. It copies them to the /sitestatus directory so that just the needed files can be easily uploaded. The site will thus have an archived copy of the externally-hosted page that can also be viewed once the main site comes back online (as long as the main site is "published" to regenerate the changes internally).

Known Bugs

There are all kinds of bugs in it, uninitialized variables, probably some malformed HTML, sneaky rendering errors that don't reveal themselves until you see how they recurse through several layers, errors loading/fetching Memcached, etc.

The common ones that I see are:

  • The automatic CamelCase, hashtag, and external link linking occasionally fails, especially ones near strange symbols or places, or multiple combinations on the same line. This is due to my poorly-constructed regex code that doesn't take into consideration all possible scenarios. Many people say NOT to use regex to parse HTML, that regular languages like regular expressions cannot parse non-regular languages like HTML. However, I decided to use regex to parse not only elements of HTML, but also my own markup, and even some Textile. Why? Because I love regular expressions↗! It is extremely fast and terse. It is an "ancient" programming language, related to finite automata, a "regular" language, which is not Turing Complete. But Perl regex is not purely regular expressions and has additions that make it more powerful, turning it into a Turing Complete language. I also make extensive use of the /e "eval" modifier, which allows Perl code execution from within the regex, and I even use the "experimental" Perl smartmatch operator inside the regex. But it can be a syntactical nightmare... My regex code likes to match clear-cut cases and may get confused if it sees potential matches surrounded by strange characters or end of lines. I haven't bothered to fix this yet since regex debugging takes time and I can just add an extra space around the link to force a match.

There are many static site generators on the Internet that you can download besides mine. If you want to run something dynamic and have a relatively fast server, use Foswiki↗ instead. It is under-appreciated and close to perfection. Foswiki is, in my opinion, the best wiki on the planet, a document-oriented database with a Web 2.0 design. The only thing I really wish it had is a native, flat-file distributed search so it could scale horizontally.


Warning: this project is experimental and not recommended for real data or production. Do not use this software (and/or schematic, if applicable) unless you read and understand the code/schematic and know what it is doing! I made it solely for myself and am only releasing the source code in the hope that it gives people insight into the program structure and is useful in some way. It might not be suitable for you, and I am not responsible for the correctness of the information and do not warrant it in any way. Hopefully you will create a much better system and not use this one.

I run this software because it makes my life simpler and gives me philosophical insights into the world. I can tinker with the system when I need to. It probably won't make your life simpler, because it's not a robust, self-contained package. It's an interrelating system, so there are a lot of pieces that have to be running in just the right way or it will crash or error out.

There are all kinds of bugs in it, but I work around them until I later find time to fix them. Sometimes I never fix them but move on to new projects. When I build things for myself, I create structures that are beautiful to me, but I rarely perfect the details. I tend to build proof-of-concept prototypes, and when I prove that they work and are useful to me, I put them into operation to make my life simpler and show me new things about the world.

I purposely choose to not add complexity to the software but keep the complexity openly exposed in the system. I don't like closed, monolithic systems; I like smaller sets of things that inter-operate. Even a Rube Goldberg machine is easy to understand, since the complexities are within plain view.

Minimalism in computing is hard to explain; you walk a fine line between not adding enough and adding too much, but there is a "zone", a small window where the human mind has enough grasp of the unique situation it is in to make a difference to human understanding. When I find these zones, I feel I must act on them, which is one of my motivating factors for taking on any personal project.

Here is an analogy: you can sit on a mountaintop and see how the tiny people below build their cities, but never meet them. You can meet the people close-up in their cities, but not see the significance of what they are building. But there is a middle ground where you can sort of see what they are doing and are close enough to them to see the importance of their journey.

The individual mind is a lens, but, like a single telescope looking at the night sky, we can either see stars that are close or stars that are much farther away, but we can't see all stars at the same time. We have to pick our stars.

I like to think of it like this:

It is not within our power to do everything, but it is within our power to do anything.

Source Code

Source code can be downloaded here.