Planning to share data in Applications


Regardless of the hardware platform we use to obtain information, officials in both the Public and Private Sectors have long thought that their credibility hinged on local familiarity. This, in turn, created a pressing need to fake local familiarity, and continuing entertainment for the real locals.

Documentation of Applications with multiple data sources is needlessly slowed down, or worse, the Applications might not be documented at all, if the need for training and the length of the learning curve are the only considerations. An intuitive interface cures all ...

Not exactly.

In the real world, the same application can and should be reused to propagate and leverage progress. The spread of a good thing is a good thing. While this is obviously true, it should be noted that production of better things is little affected by the usual mechanism of economic competition.


So, how does one make a good thing travel well?

Briefly, design or retrofit the Application so as to eliminate as many risks of travel as possible. The biggest travel risk for an Application is the loss of local familiarity. The temptation is to grow into serving multiple locales. This strategy is very difficult to design and risky to implement.

Reformation of a large organization's data sources should not be necessary. However desirable such a reformation might be, it will introduce an extra dose of uncertainty.

The two examples describe an XML version of a data transport scheme which is the basis for the World Wide Web.

This description of the highway, rather than the vehicle using it, is helpful because some data travels in bulk and some data is useful only in its entirety. Plain Text is the latter kind and locale-specific data the former.

This seems to pick economic winners and losers, from which not-for-profit organizations shy away. However, this (strawman) argument is easily undercut: data cannot make an Application understand any new features with which it is endowed, just as an Application has no awareness of the route the data took to reach it, or even of the fact that it is faking local familiarity to begin with. Anthropomorphism is two one-way streets which seem to go in opposite directions. Best Practice: just pick one and go nowhere.


The collection of data sources is XML generated by a Perl script. This is the simplest way to apply a regular-expression split to data set location Identifiers. At this point, the text descriptions are empty, and the regular expressions have proved themselves poor magicians by failing (in some cases) to confuse an identifier with a query. This ability is very helpful to the Semantic Web, but only if transparent. A toll road is only harmful if you do not realize you are on a toll road.
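A minimal sketch of that regular-expression split, in Python rather than Perl; the identifiers, the XML element names, and the use of "?" as the identifier/query boundary are assumptions for illustration, not the actual script's logic:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical data set location identifiers; names are illustrative only.
identifiers = [
    "http://example.gov/data/census2020",
    "http://example.gov/lookup?region=TX",   # a query, not a plain identifier
]

root = ET.Element("DataSources")
for ident in identifiers:
    # Split the identifier from any query component; a '?' marks the boundary.
    parts = re.split(r"\?", ident, maxsplit=1)
    src = ET.SubElement(root, "Source")
    ET.SubElement(src, "Identifier").text = parts[0]
    if len(parts) == 2:
        ET.SubElement(src, "Query").text = parts[1]
    # The text description is left empty at this stage, as in the collection.
    ET.SubElement(src, "Description").text = ""

print(ET.tostring(root, encoding="unicode"))
```

The split keeps the identifier and the query in separate elements, so a consumer can see at a glance which locations are toll roads.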

The Plain Text description is then written with meta characters describing the path from the data source to the Application. In the abstract, the back-and-forth motion of data and requests for data is described by the DURI (Application to Source) and the LURI (Source to Application) paths. This is an engineering convention, with a nod to the protein chemistry (d,l isomers) that a working Application closely resembles (cf. MashUps101.pdf, MashUps101t.pdf).
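The two legs of that convention can be sketched as mirror-image records. DURI and LURI here follow the document's own naming; the field layout is a hypothetical illustration of the d,l mirror symmetry, not a published format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DURI:
    """Application-to-Source path (the request leg); fields are illustrative."""
    application: str
    source: str

@dataclass(frozen=True)
class LURI:
    """Source-to-Application path (the reply leg); fields are illustrative."""
    source: str
    application: str

def round_trip(app: str, src: str) -> tuple:
    # The two legs are mirror images of one another, like d,l isomers.
    return DURI(app, src), LURI(src, app)

d, l = round_trip("myApp", "http://example.gov/data")
print(d)
print(l)
```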

The first example is Plain Text in the UTF-8 Character Set. Although any glyph or language can be encoded, there are a small number of characters with a conceptual meaning for the data which require a loose grouping with specific detail.
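A small illustration, assuming the characters with conceptual meaning are the usual XML markup characters ("<", "&", ">"); any UTF-8 glyph passes through untouched, while the meta characters are escaped:

```python
from xml.sax.saxutils import escape

# Any glyph can be carried in UTF-8, but '<', '&', and '>' carry conceptual
# meaning for the markup and must be escaped when they appear in the data.
raw = "Salaries < $50,000 & benefits (café, données)"
print(escape(raw))
```

The accented glyphs survive intact; only the three meta characters are rewritten as entities.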

The second example is various data points obtained from various eGovernment Agency Gateways. Any Private Sector Gateway can share data in the same fashion. All security is perimeter security.

Application Developers can validate data sources in this format to ensure interoperability.
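A sketch of such a validation check, assuming a hypothetical element layout (no published schema is given here); it tests only well-formedness and the expected root element:

```python
import xml.etree.ElementTree as ET

# A hypothetical data-source document in the collection's format;
# the element names are assumptions, not a published schema.
document = """<?xml version="1.0" encoding="UTF-8"?>
<DataSources>
  <Source>
    <Identifier>http://example.gov/data/census2020</Identifier>
    <Description>Decennial census extract</Description>
  </Source>
</DataSources>"""

def validate(xml_text: str) -> bool:
    """Return True if the text is well-formed XML with the expected root."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    return root.tag == "DataSources"

print(validate(document))
```

A production check would validate against a DTD or XML Schema as well; well-formedness is only the floor for interoperability.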



The implementation of a Web Application requires a strategy or plan for extending the Application's coverage.

Questions? [ ◊ gannon_dick AT yahoo DOT com ]