Tuesday, October 02, 2012

S&T Cobra D+1 Scenario

This is a scenario for the Strategy & Tactics issue 251 magazine game "Cobra", which is itself a remake of the classic SPI game "Cobra". The original game came with an expansion which covered the actual D-Day landings as well as the build-up before the Cobra breakout itself. The magazine game re-presented this expansion in a more integrated format.

The problem I have found with the expansion, however, is that it is very difficult to get a historical, or close to historical, result for the landing, usually because the beach combat table is very prone to dice luck (or the lack of it), and because the movement sequence on the first turn does not really make the actual Allied advance inland on the 6th of June possible.

This scenario attempts to portray the situation at midnight on the 6th of June, effectively starting the game at D+1, or game turn 2. This allows for a battle of the build-up from a solid historical base. Units are positioned, as far as is known, at their locations at the start of the 7th of June, 1944, with historical loss levels.

The scenario creates the issues and opportunities that both sides faced on the 7th. The Germans can hurl the 12 SS at one selected target, either the 6th Airborne lodgement or the Sword beach units, and potentially cause considerable damage in spite of naval and air support. However, this risks leaving units to the west dangerously unsupported, and creates a major danger of a breakout east of Bayeux. In reality the safe course is to break up the panzer divisions and brace the damaged infantry divisions with them.

Around Omaha and Utah, containment is the word of the day, falling back to Carentan whilst trying to bring in more reinforcements.

On the Allied side the day's aims are to link Sword with Juno and remove the remnants at Douvres, whilst building up the British lodgement, attempting to link with Omaha and strengthening the 6th Airborne. Omaha needs to slowly expand to allow more units to land. Likewise Utah needs to consolidate and land fresh infantry divisions to begin the drive to link with Omaha and also cut the peninsula.

Game Start
The game starts with the start of game turn 2. All rules for the expansion (i.e. the Build Up scenario) are in effect. Weather for this turn (only) is fixed at Overcast. All invasion beach hexes have been captured by the Allies.

Extra Rules
I have found that the following house rules work well with Cobra.

14.6.5 Defensive Shore Bombardment
The Allies may use defensive shore bombardment to help defend against Axis attacks. Shore bombardment defends at the same ranges as offensive shore bombardment. However, only 1 shift can be gained from defensive shore bombardment, and an NSP expended on defensive shore bombardment cannot then be allocated to offensive shore bombardment in the Allied player turn.

14.5.2 Carpet Bombing

The carpet bombing rules in Cobra don't really make sense, as they disallow the Allies from attacking hexes being carpet bombed or advancing into those hexes in the mechanized movement phase. Serious carpet bombing was almost always accompanied by a subsequent ground attack. The aim of the carpet bombing was to exploit through that area, the problem being the inherent destruction caused to the road net. Therefore the following replaces 14.5.2:

Allied units must attack the hex which is carpet bombed, irrespective of the carpet bombing result. No additional air power points may be used in this attack. In addition, one shift to the left benefitting the defender occurs, simulating the difficulties of attacking through the churned-up terrain. The Allies may advance after combat into the carpet bombed hex if it is vacated, but no further, irrespective of the result. During the subsequent mechanized movement phase, the carpet bombed hex costs 7 movement points for mechanized units or 4 movement points for infantry to leave. In addition, any road in the carpet bombed hex is treated as non-existent for that turn, into or out of it.

If the hex being carpet bombed is a city hex, no exploitation from the hex is allowed.
The following lists the setup of units for the Cobra D+1 scenario. Units marked with a * have lost one step. Map B locations are on the "Invasion" map (i.e. the map with Cherbourg and the landing beaches on it), whilst map A locations are on the "Breakout" map (i.e. the classic Cobra map covering the southern part of Normandy).


1st Infantry Division | B3319* | Omaha Beach East - effectively has one regiment non-effective after landing
4th Infantry Division | B2215 | Landed Utah - moved inland, mainly wheeling north
29th Infantry Division | B3118* | Omaha Beach West - one regiment down due to landing losses
505/82nd Airborne Regiment | B2016 | St-Mere-Eglise
507/82nd Airborne Regiment | B1815* | Badly scattered and in swamps
508/82nd Airborne Regiment | B1917* | Badly scattered and in swamps
501/101 Airborne Regiment | B2217 |
502/101 Airborne Regiment | B2215* | Badly scattered - most serials landed on the northern side of the Utah lodgement
506/101 Airborne Regiment | B2218 |
2+5 Ranger Btns | B2816 | Point du Hoc - in reality only a few companies landed here; the majority landed on the west end of Omaha beach.

3rd Infantry Division | A3602 | Sword Beach - advanced to Lebisey before successfully fending off the 21st Panzer counterattack.
50th Infantry Division | A3002 | Gold Beach - lead elements penetrated as far as the Bayeux-Caen road before pulling back.
3/6th Airborne Regiment | A3904 |
5/6th Airborne Regiment | A3803 |
6AL/6th Airborne Regiment | A3803 |
8th Armoured Brigade | B3820 |
27 Armoured Brigade | A3501 |
1SS Brigade | B3619 | Advancing on Port-en-Bassin
4SS Brigade | A3903 | Reinforced the 6th Airborne lodgement after advancing across Pegasus Bridge.

3rd Canadian Division | B3302 | Juno Beach
2nd Canadian Armoured Brigade | B4320 | Juno Beach

1057/91 | B1816* | Attempting to push on St-Mere-Eglise - battles at the Chef-du-Pont causeways
1058/91 | B2015 | Attempting to push on St-Mere-Eglise from the north with StuG IIIs.
6FJ/91 | B2119* | Damaged trying to advance on St-Mere-Eglise and also trying to disengage in the early hours of the 7th of June.
920/243 | B0811 | West coast division - largely did not move on the 6th of June
922/243 | B2114 | Advanced on the 6th to north of Utah. Arrived at night.
914/352 | B2817 | Based between Isigny and Carentan. Belatedly arrived near Omaha beach and partially engaged the Rangers at Point du Hoc.
915/352 | A2903* | Spent D-Day chasing phantom paratroopers south of Bayeux before returning northwards late in the day to be badly dealt with by the advance from Gold Beach.
916/352 | B3119* | Main defensive unit on Omaha beach. Defended well but badly damaged by end of day.
729/709 | B1807 | Cherbourg unit
739/709 | B1309 | St Pierre Eglise
919/709 | B2214 | Defended Utah beach in part for probably less than an hour, then the northern flank of the Utah lodgement
731/711 | A4101 | Advanced from Trouville on D-Day towards the 6th Airborne lodgement
744/711 | A4201 | Advanced from Trouville on D-Day towards the 6th Airborne lodgement
763/711 | A4301 | Advanced from Trouville on D-Day towards the 6th Airborne lodgement
708/716 | A3202* | This appears to be a misprint, as there is no record of a 708th regiment of the 716th Infantry Division. I have tried to place it where elements of the 736/716 retreated to.
726/716 | B3520* | Elements of the 716th were on most beaches from Sword to Omaha on D-Day. This regiment defended the eastern end of Omaha beach through to the Bayeux sector. By the end of the day it appears to have fallen back to the north-west of Bayeux after being damaged somewhat.
736/716 | B4420* | Douvres fortified area - the 736th was split three ways. Part was left north of the 6th Airborne lodgement, remnants fell back west of Caen, and some parts retreated into the Douvres complex. Basically almost destroyed on D-Day. Hard to place in this scenario.
FJT | B1208 | Reserve unit in the Cherbourg peninsula.
1 Flak Regiment | B3019 |
30 Flak Regiment | B1509 |
32 Flak Battalion | B2618 |
922 Flak Battalion | B1208 |
100/21 Panzer Regiment | A3603* | Damaged from the advance to the coast at Lion-sur-Mer in the late afternoon. A combination of 6lb ATGs, Sherman Fireflies and Priests destroyed or damaged 40+ of the regiment's tanks. Withdrew to north of Caen at end of day.
125/21 Panzer Grenadier Regiment | A3704 | The recon battalion of 21st Panzer was sent against the 6th Airborne landing area early on the morning of the 6th, drawing in more forces. The 125 PzG regiment was placed north of Caen facing the 6th Airborne late on the 6th.
192/21 Panzer Grenadier Regiment | A3603 | Advanced to the coast with the 100th Panzer Regiment. Took surprisingly few losses. Fell back to north of Caen at end of day.
30 Panzer Battalion | A2402 | Misnomer - should probably be the 30th Regiment or 30th Mobile Group depending on sources. Advanced to the Bayeux area on D-Day from the Coutances area. Elements present near Omaha beach, but not heavily engaged. Location is approximate at best.
12/12 SS Panzer Regiment | A3605 | Road march to Caen on D-Day. Preparing for a counterattack on the 7th/8th which never happened.
25/12 SS Panzer Grenadier Regiment | A3404 | Road march to Caen on D-Day. Preparing for a counterattack on the 7th/8th which never happened.
26/12 SS Panzer Grenadier Regiment | A3807 | Road march to Caen on D-Day. Preparing for a counterattack on the 7th/8th which never happened.
LXXXIV Corps HQ | A1606 | In St Lo. Did not move.

Sunday, July 05, 2009

Blast from the past - HTMLR - HTML Refresh

Well, time to start putting some posts on this blog for real. And I thought I might start with a blast from the past. This is where my head was in 1997, when dot-coms were just starting for real... This is the spec I put up to the IETF for an HTML Refresh language. Today you could most probably achieve a similar result with AJAX, but not nearly as easily or efficiently as if it was built into HTML and parsed natively by the browser (though maybe a bit more flexibly and extensibly).

Anyway, hope you enjoy it and that it stimulates some more ideas:

Here's the original IETF draft - http://tools.ietf.org/html/draft-rfced-exp-nelson-00

And the Text of it:

R. Nelson
12 October 1998



Status of This Memo

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its
areas, and its working groups. Note that other groups may also
distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other
documents at any time. It is inappropriate to use Internet-
Drafts as reference material or to cite them other than as
"work in progress."

To view the entire list of current Internet-Drafts, please check
the "1id-abstracts.txt" listing contained in the Internet-Drafts
Shadow Directories on ftp.is.co.za (Africa), ftp.nordu.net
(Northern Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au
(Pacific Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu
(US West Coast).

Distribution of this document is unlimited.

Copyright Notice

Copyright (C) Ross Nelson (1998). All Rights Reserved.


This document describes HTML REFRESH, an EXPERIMENTAL language
and protocol for refreshing HTML pages and allowing serious
thin-client/server applications via HTTP [RFC2068].

1. Rationale and Scope

HTML forms have changed little in functionality or feature since
the inception of the HTML standard. Whilst HTML forms allow the
submission of form data from visible and hidden fields up to a
server side CGI program (or some derivative thereof), the results
must come back as a complete HTML page, either in the existing
window/frame or in another browser window or frame.

This is particularly tedious as the entire target page needs to
be redrawn, even if only certain data elements have changed.
This has two very negative effects. Firstly, the bandwidth
requirements are increased as the entire page format must be sent
down to the browser again and not just the "field" data which has
changed. Secondly, the effect of redrawing the entire screen does
not allow the development of user-friendly thin-client/server
applications (where the client is the web browser) and currently
leads to user disorientation.

Various browser "add-ins", such as "Java" have been developed
whilst HTML forms have largely been allowed to languish. This is
extremely unfortunate as by far the largest number of transactions
over the Internet occur via HTML forms.

This document specifies an HTML REFRESH language, which permits
the refreshing of the form data elements and images on an HTML
browser page without the redrawing of the entire page. This
allows serious user interfaces to be developed whilst using
less bandwidth to do so.

Future versions of this protocol may include extensions for
refreshing non-form elements of a web page, in line with DHTML.


2. The HTMLR Language

The HTMLR language is built using the concepts of the HTML
language and is to be used in web browsers in conjunction
with HTML. Needless to say, the main delivery method for
HTMLR is HTTP, with the use of a new mime-type.

2.1 HTTP Added mime-type

HTTP would carry the following additional mime-type, which both
the web server and the browser would comprehend. The mime-type is:

text/htmlr

which would denote the content which followed as an HTML refresh.
An HTML REFRESH aware browser would acknowledge the mime-type
and note not to redraw the target page from scratch but instead
integrate the results with it.
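As a sketch of what delivery might look like on the server side, here is a minimal CGI-style response wrapper in Python. The helper name and the HTMLR payload are illustrative; only the text/htmlr mime-type itself comes from the draft.

```python
# Minimal sketch: framing an HTMLR fragment as a CGI-style response
# carrying the text/htmlr mime-type. Helper and payload are illustrative.

def htmlr_response(body: str) -> str:
    """Wrap an HTMLR fragment in CGI-style headers."""
    return "Content-Type: text/htmlr\r\n\r\n" + body

resp = htmlr_response('<HTMLR><STATUS VALUE="Record saved"></HTMLR>')
print(resp.splitlines()[0])  # Content-Type: text/htmlr
```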

2.2 HTMLR Language

The HTMLR Language uses HTML like syntax to denote the refreshes
that are to be made to a HTML page. The following tags and
attributes are used to specify these refreshes. Each tag is
covered below with accompanying description and example.

It is anticipated that HTMLR response pages would be generated by
existing CGI (or like) capable programming languages, for example
PERL, ASP, COLD FUSION, etc. Such languages should be easily
capable of generating HTMLR and also changing the response
mime-type.

2.3.1 HTMLR

<HTMLR> ... </HTMLR>

The HTMLR tag denotes that all tags and text until the /HTMLR
tag comprise a refresh of the existing HTML page/frame as
displayed by the browser. This tag is equivalent in import to the
<HTML></HTML> tags. Upon encountering a HTMLR tag, a browser
should not clear the existing HTML display page/frame, but rather
interpret the contents of the HTMLR tag and apply the relevant
processing to the current page.

Valid tags within HTMLR tags are specified in the rest of this
section.

<HTMLR>
..... refresh tags ....
</HTMLR>


2.3.2 WITHFORM

<WITHFORM NAME="form-name">....</WITHFORM>

The WITHFORM tag denotes which form the tags within it apply to.
The form-name specified with the NAME parameter must match the
name of an existing form on the currently displayed page. The
browser should treat all tags encountered within the WITHFORM
tags as dealing with the specified form where applicable.

Tags which are affected by the WITHFORM tag are SETINPUT,
CLEARINPUT, SETTEXTAREA, WITHSELECT and SETOPTION.

If WITHFORM does not enclose these tags, they are deemed to be
relating to the first form on the current page.

<WITHFORM NAME="person">
.... refresh tags for person form .....
</WITHFORM>



2.3.3 CLEARINPUT

The CLEARINPUT tag clears all fields/checkboxes/radiobuttons/
textareas/buttons in the currently targeted form and resets them
to either empty or their default values. The targeted form is the
one specified in the enclosing WITHFORM tag, or in the absence
of this, the first form on the page. This should be processed in
sequence by the browser, thus any subsequent SETINPUT tags would
set the fields away from their default or empty values.

The EMPTY attribute sets the fields to empty whilst the DEFAULT
attribute set the fields to the original default value as
specified in the original HTML page.

<STATUS VALUE="Person Record Added">



2.3.4 SETINPUT

The SETINPUT tag sets the input-field to the new-value specified
in the VALUE parameter. For radio button and checkbox fields, the
CHECKED/UNCHECKED parameter can be specified to alter the field
appearance. The field-name, specified in the NAME parameter must
match the name of a field (hidden/text/radio/checkbox/button) in
the targeted form on the current page. The targeted form is the
one specified in the enclosing WITHFORM tag, or in the absence
of this, the first form on the page. For radio button fields,
the new-value must also match the existing value of the named
field in the current form.

The HTML 3.0 proposed (but not widely implemented) DISABLED
parameter could also be used in SETINPUT, along with ENABLED to
dynamically enable/disable the input field.

<SETINPUT NAME="Name" VALUE="Fred Jones">
<SETINPUT NAME="Dob" VALUE="26/Jan/1971">
<SETINPUT NAME="Address" VALUE="35 Fred Street, Springfield">
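On the browser side, the bookkeeping for SETINPUT can be pictured as updating a map of field values. This toy Python sketch is a simplification of what a real browser would do (the regex stands in for a proper parser, and the dict stands in for live form widgets):

```python
import re

# Toy model of applying SETINPUT tags to the current form, held here
# as a dict of field values. A real browser would update live widgets;
# the regex is a simplification rather than a proper parser.

SETINPUT_RE = re.compile(r'<SETINPUT NAME="([^"]*)" VALUE="([^"]*)">')

def apply_refresh(form: dict, htmlr: str) -> dict:
    for name, value in SETINPUT_RE.findall(htmlr):
        if name in form:              # only existing fields are refreshed
            form[name] = value
    return form

form = {"Name": "", "Dob": "", "Address": ""}
apply_refresh(form, '<SETINPUT NAME="Name" VALUE="Fred Jones">'
                    '<SETINPUT NAME="Dob" VALUE="26/Jan/1971">')
print(form["Name"])  # Fred Jones
```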



2.3.5 SETTEXTAREA

The SETTEXTAREA tag sets the textarea field to the new-value
specified before the closing /SETTEXTAREA tag. The field-name,
specified in the NAME parameter must match the name
of a textarea field in the targeted form on the current page.
The targeted form is the one specified in the enclosing WITHFORM
tag, or in the absence of this, the first form on the page.

The HTML 3.0 proposed (but not implemented) DISABLED parameter
could also be used in SETTEXTAREA, along with ENABLED to
dynamically enable/disable the textarea.

<SETTEXTAREA NAME="Comments">
The comments for this record are these
</SETTEXTAREA>


2.3.6 SETFOCUS

<SETFOCUS FORM="form-name" FIELD="field-name">

The SETFOCUS tag sets the input focus to the field/textarea/selectlist
/checkbox/radiobutton-set/button with the name specified by the FIELD
parameter. The form the field is in is specified by the FORM
parameter. This tag is not affected by the WITHFORM tag as it
must set a definitive focus for the entire page, regardless of
how many forms are present.

<MSGBOX>You must enter a Name</MSGBOX>



2.3.7 WITHSELECT

The WITHSELECT tag is used to choose and set a select list
object in the current form. The field-name, specified in the
NAME parameter must match the name of a select list object
in the targeted form on the current page. The targeted form is
the one specified in the enclosing WITHFORM tag, or in the absence
of this, the first form on the page.

The DESELECTALL parameter immediately de-selects all existing
items in the select list. The REMOVEALL parameter immediately
removes all items from the select list.

The HTML 3.0 proposed (but seldom implemented) DISABLED parameter
could also be used in WITHSELECT, along with ENABLED to
dynamically enable/disable the SELECT list.





2.3.8 SETOPTION

The SETOPTION tag is used to add, alter, or delete a select
list item of the current SELECT list object. The current select
list is the select list named by the last WITHSELECT within the
currently targeted form. The SETOPTION tag is invalid outside of a
WITHSELECT. The targeted form is the one specified in the
enclosing WITHFORM tag, or in the absence of this, the first form
on the page.

The ADD/DELETE parameter is used to add and delete items
respectively from the SELECT list. The SELECTED/DESELECTED
parameter is used to select/deselect an item after it has been
created, or if it already exists, to alter it.

See WITHSELECT tag example

2.3.9 MSGBOX

<MSGBOX {TITLE="title"}>message</MSGBOX>

The MSGBOX tag displays a centered message box to the user with
the message supplied before the </MSGBOX> tag enclosed in it.
The message box must be modal and have an 'OK' button to allow
the user to proceed. The browser should process the MSGBOX tag
immediately, before parsing/processing any more of the HTML REFRESH.
The optional TITLE parameter specifies a title for the message box.

The text between the MSGBOX and /MSGBOX tags should not contain HTML
formatting, and browsers may wrap the text as well as obey CRLF
combinations found in the text.

The MSGBOX tag allows for easy server generated intrusive messages
without affecting the browser page display.

<MSGBOX TITLE="Update Successful"
>The Record has been updated.</MSGBOX>

2.3.10 STATUS

<STATUS VALUE="status-line-value">

The STATUS tag is used to place the value specified in the VALUE
parameter into the status line at the bottom of the browser window.

The STATUS tag allows for another form of easy server generated
intrusive messages without affecting the browser page display.

<STATUS VALUE="Please correct the value in the Age Field.">


2.3.11 PRINT and PRINTURL

<PRINT {TO=printer-name} {ORIENT=orientation} {TRAY=traynumber}
{COPIES=copy-count}> .... </PRINT>
<PRINTURL {TO=printer-name} {ORIENT=orientation}
{TRAY=traynumber} {COPIES=copy-count} SRC="url">

The PRINT tag is used to print HTML to the specified printer.
The HTML to print is supplied between the PRINT and /PRINT tags.
The print is sent to the printer specified by the optional TO
parameter. If no TO parameter is specified, a printer dialog
should be displayed for the user to select a target printer
from. Printing should occur in parallel to any other browser
processing. The TO option is of most value in an intranet
environment.

The ORIENT, TRAY and COPIES parameters are all options which
allow control over the printing process. The ORIENT parameter
can be used to specify "landscape" or "portrait" printing. The
TRAY parameter can be used to select a paper source. The COPIES
parameter can be used to specify the number of copies to print. All
are optional and are most suited to intranet systems.

The PRINTURL tag functions the same as the PRINT tag in terms of
parameters, except that the content to print is supplied by the
url specified in the SRC parameter. The browser should open the
specified url and print the resultant stream as requested. The
printing method should be dictated by the mime-type returned.

Browsers should aim to support multiple PRINT requests in a
single HTML REFRESH stream.

The HTML allowable between the PRINT and /PRINT tags should be
of the same conformance level as the normal HTML supported by
the browser and print exactly the same as a user activated print
of a normal web page.

<MSGBOX>The person record will now be printed to your
"HP" printer.</MSGBOX>
<PRINT TO="hp01" ORIENT="portrait" TRAY="3" COPIES="1">
<TITLE>Person Record 123321</TITLE>
<H2>Person Record 123321</H2>
<B>Name:</B> John Smith<BR>
<B>DoB: </B> 14/Mar/1969<BR>
<B>Address: </B> 14 James St Smithville<BR>
</PRINT>

2.3.12 BELL

<BELL>

The BELL tag makes the browser produce an audible or visible bell.

<BELL>
<MSGBOX>The server has detected an error.</MSGBOX>

2.3.13 SETIMG

<SETIMG NAME="image-name" SRC="url">
The SETIMG tag is used to set images to new images based on a new
URL. The "image-name" given in the NAME parameter must match the
name of an image on the current HTML page. The new image is loaded
into the same screen area as specified by the original IMG tag on
the original HTML page.

The browser will place the new image on the page in the same
location as the old image, with the same dimensions to avoid
page resizing.

<SETIMG NAME="EmployeePic" SRC="/images/employee/002012.jpg">

3. Operational Constraints and Implications

3.1 Web Servers

Web servers may require configuration to allow the text/htmlr
mime-type to be transmitted from the CGI program.

3.2 Web Browsers

Web browsers will naturally be required to support the protocol
with substantial internal changes. On receipt of an HTML REFRESH
of a given page, the page will not be redrawn; instead the
fields will be altered as required. The refresh should NOT be
placed in any history or "BACK" button cache, as this does not
make sense.

3.3 Javascript/VBscript Implications

Javascript/VBscript browser implementations could possibly be
extended to support an "OnRefresh" event in a similar manner
to the existing "OnLoad" event. This event would be triggered
upon receipt and application of an HTML REFRESH to the page.
Appropriate extensions to the HTML BODY tag syntax would need to
be made to support the "OnRefresh".

3.4 CGI Programs

CGI program authors would gain the freedom to write serious
thin-client/server applications with HTML REFRESH. For example,
an HTML page could have buttons to move forward and backward
through records in a database. Upon pressing either button, a
submission would be sent to the appropriate Web Server/CGI
program. It would navigate to the next/previous database row and
return new data for the HTML form fields using an HTML REFRESH.

This refresh would only alter the values in the HTML FORM fields
on the page, thus lessening bandwidth requirements, aiding
usability and removing redundant page redraws.
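That record-navigation flow can be sketched in Python as follows. The record list stands in for the database and navigate is a hypothetical CGI handler; neither is part of the draft.

```python
# Hypothetical CGI handler for the next/previous buttons described
# above: look up the adjacent row and answer with an HTMLR refresh
# rather than a full page. A list of dicts stands in for the database.

RECORDS = [
    {"Name": "Fred Jones", "Dob": "26/Jan/1971"},
    {"Name": "John Smith", "Dob": "14/Mar/1969"},
]

def navigate(current_row: int, direction: str) -> str:
    step = 1 if direction == "next" else -1
    row = RECORDS[(current_row + step) % len(RECORDS)]
    tags = "".join(f'<SETINPUT NAME="{k}" VALUE="{v}">' for k, v in row.items())
    return f"<HTMLR>{tags}</HTMLR>"

print(navigate(0, "next"))
```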

3.5 Security

HTML REFRESH pages would travel under HTTPS the same as HTML and
therefore enjoy the same security benefits.

4. Acknowledgements

Thanks in particular to Steve Aldred, Nigel Williams and last but
not least Joanna Ladon for encouragement and review.

5. References

[RFC2068] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., and T.
Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2068,
January 1997.

6. Author's Address

Ross Nelson


Saturday, October 06, 2007


What's this about?

Ever wanted to show on a wiki page the information that you've collaborated on as well as some structured data from a corporate data source, external web service or some other system of truth? Well read on...


Recently I was working on a prototype Enterprise Architecture wiki. Basically it contained information about applications, installations, infrastructure (software and hardware) and pretty much most of the IT capabilities of an organisation. Someone pointed out that some of this knowledge was already in another structured data source, backed by a traditional database. This information was reliable and pretty much the record of truth, and what they thought would be good was a fusion of that information with the power and function of a wiki.

So the basic idea was born. Fuse the knowledge reliability of traditional data sources with the power and function of a wiki. Note that I say data sources and not database. A data source could be a database, a web service or an external application, all positioned behind a web service. Think of a mashup between a wiki and, well, anything!

What it may look like in practice...

Imagine a wiki that has some details about high value customers that your elite sales team targets. We are talking about an Enterprise 2.0 (that's Web 2.0 inside an organisation, on an intranet) wiki here, where we have multiple pages, one per high value customer.

On these pages we put information about the customer; sales people add notes, ideas on strategies for up-selling, their known preferences, links to other customers who are known acquaintances, etc. The problem is we already have a customer database sitting on a corporate server.

It's the system of record for the customer, their contact details, basic sales history, etc. We don't want to duplicate that information in the wiki and have to update details in two places, or worse still, let them get out of sync. Likewise our existing line-of-business sales system is too inflexible to allow for collaboration on customer information or easy navigation.

Enter the WikiPart. On the wiki page we have some information that comes from standard wiki page editing. But we also have other information that comes via WikiParts from external data sources (external to the wiki, that is). For example, the sections that have the customer name, DOB, contact details, current total sales worth etc. are brought in via WikiParts from our existing sales system database.

The rest of the page is strictly wiki. Users can create and edit notes, information and strategies, and add wiki links as required.

This is just one example. But think about it. There are probably a million others out there just waiting to be fused together...

Actual Implementation...

I'm trying to run over how this might be implemented at a high level in MediaWiki, mainly because it's widely used and also already has an existing rich syntax that could be extended.

In MediaWiki I see basically two items that wiki users would add to a page. The first is the WikiPart, which provides the linkage to the web service that accesses the data source.

The second is the parameter substitution that allows the use of this data in the wiki's page. So at the top of the wiki page there may be something like:


This would cause the wiki server to call the web service "salesdata" with a message containing a parameter called "customerId" with a value of 13266. The service would then query the sales system (or its database directly) and return a variety of values as part of the web service return message (such as the customer's name and address contact details).

These values could then be parameter substituted into the wiki page. A typical wiki page that utilises this web service output may be as follows:


This customer has said previously he is interested in our [[super widget]] and [[wonder widget]] products but that the price was too high at that time. He is known to like golf and drives an old Jaguar.

As you can see I've glossed over the web service details and message syntax. For now I've tried to keep it high level to get feedback. In reality, most of what has been shown in the example above would be done on page templates.

You would probably set up a "Customer" page template with all the substitutions and the initial WikiPart in place. In the actual wiki page that utilises the template, the value for the customerId would need to be specified and utilised when the template is evaluated on the wiki server. Again, I'll leave details till another time.
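To make the render-time substitution step concrete, here is a rough Python sketch of what the wiki server might do. Everything here is hypothetical: the {{wikipart:...}} placeholder syntax, the fetch_salesdata stub and the sample values are invented for illustration, not real MediaWiki syntax.

```python
import re

# Illustrative only: an invented placeholder syntax and a stub service
# call, showing the substitution step described in the post.

def fetch_salesdata(customer_id: int) -> dict:
    # Stand-in for the "salesdata" web service call.
    return {"name": "Jane Citizen", "totalSales": "$1.2M"}

def render(page_text: str, customer_id: int) -> str:
    values = fetch_salesdata(customer_id)
    # Replace hypothetical {{wikipart:key}} placeholders with service data;
    # unknown keys are left untouched.
    return re.sub(
        r"\{\{wikipart:(\w+)\}\}",
        lambda m: values.get(m.group(1), m.group(0)),
        page_text,
    )

print(render("Customer {{wikipart:name}}: {{wikipart:totalSales}}", 13266))
# Customer Jane Citizen: $1.2M
```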

Other wiki products would have similar approaches. However some, like the wiki offering in Microsoft's SharePoint 2007, may need a slightly different approach. It relies on an advanced page editor (think a very simple web version of MS-Word) and supports no markup other than wiki links.

It does however allow the wiki page to be Web Part "edited", which allows you to put web parts around the wiki page (but not really in it), so there is scope for developing a .Net WebPart that does the job I described above, though the current look and feel may not be particularly integrated.

And for extra merit points....

The true pleasure and pain of WikiParts comes when they become a two-way street. That's where wiki pages retrieve information from the web service, and also update it. When a page is edited, or maybe even created, the wiki server could place dialog boxes on the page for editing (or input) of values that are linked to WikiParts. On submission, the values would be sent via the WikiPart and web service to the datasource, whilst the standard wiki text would also be updated. The input fields would default to their current values on an edit.

Of course the pain here is input validation. The data consistency of the underlying datasource must be preserved. The web service implementation operating over it needs to enforce such rules and reflect any violations back to the wiki server in a reasonable manner. In addition, some level of client-side data editing functionality (i.e. list boxes of possible values, range and field length checking) would have to be specifiable to provide users with a reasonable experience.
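As a sketch of how declared rules might be checked before the write-back, with violations collected for the wiki server to reflect back to the user, here is a small Python example. The rule format and field names are invented for illustration.

```python
# Invented rule format: a sketch of validating WikiPart input before
# it is pushed to the data source, collecting violations to reflect
# back to the editing user.

RULES = {
    "age": {"numeric": True, "min": 0, "max": 130},
    "name": {"maxlen": 60},
}

def validate(fields: dict) -> list:
    errors = []
    for name, value in fields.items():
        rule = RULES.get(name, {})
        if rule.get("numeric"):
            if not value.lstrip("-").isdigit():
                errors.append(f"{name}: must be a number")
                continue
            if not rule["min"] <= int(value) <= rule["max"]:
                errors.append(f"{name}: out of range")
        if "maxlen" in rule and len(value) > rule["maxlen"]:
            errors.append(f"{name}: too long")
    return errors

print(validate({"age": "200", "name": "Jane"}))  # ['age: out of range']
```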

Again this is far in the future, and there are probably many WikiPart based wikis that would simply be read-only from the external data source. But again the possibilities for merging structured and semi-structured data in enterprise wikis are enticing.

Wrapping Up

So there you have it, WikiParts. Merging information and data. Combining what previously was lost in Word documents on a file share and what was locked up in corporate databases, into one collaborative environment. And being web service based, almost any data external to the wiki can be accessed and merged into the displayed page.

Maybe a wiki product out there is doing this already, or something similar. There are literally hundreds of wiki products on the market or available as open source or freeware. I haven't surveyed that many and it wouldn't surprise me. I suspect someone may have also already hand-hacked an open source wiki to incorporate such a feature.

But I suspect this hasn't appeared in the mainstream products yet and it needs to soon, to completely unlock the power of wikis, collaboration and structured data combined for the first time.


Wednesday, July 18, 2007

New Blog - Enterprise Architecture by Wiki

I've just started a new blog devoted to my Enterprise Architecture By Wiki concept and it can be found here at http://eabywiki.blogspot.com/. I'll still be posting here from time to time.

Tuesday, January 23, 2007

Comet McNaught Take 4

My fourth set of Comet McNaught images, this time from a dark-skies site about 35km NNE of Canberra. There was no cloud for the first 30 minutes after sighting the comet, but it slowly rolled in. In spite of this I got some very nice images of the comet and the immense arcing striated tail, which I'm guessing doesn't completely set till midnight.

Conditions were not ideal, however, due to blasting winds and a large amount of dust from the drought-affected fields. The Moon is also just about to start having an effect on viewing. Please enjoy these shots, and an Orion starfield for good measure!

Monday, January 22, 2007

Comet McNaught Take 3

Last night (21-01-07) the trough of bad weather crossing SE Australia cleared from the ACT region at just the right time. About 1.5 hours before sundown a hole developed to the SE and widened, with only low cloud hanging on the Brindabellas and passing scud. Comet McNaught again looked fantastic, with the striations in the tail clearly visible and a giant arc to the north being faintly suggested. The nucleus was lost in a cloud bank about 30 minutes before it would normally set, but it was about to enter ground haze anyway. The comet may have been slightly darker than the previous night but was still incredible. A two-day-old moon was visible but quickly set. In another 2 days it will start to disrupt the viewing, I think. The better photos are below.

Sunday, January 21, 2007

Comet McNaught 2nd Take

Here is my second set of McNaught photos, from 20-Jan-07. An absolutely brilliant comet, though I wasn't in the right location. Still, great amazement and fun had by all. The tail stretched 15 degrees where I was, but I could only fit so much in the field of view due to the street lighting around.

Monday, January 15, 2007

Comet McNaught First Take

The following are the first shots of Comet McNaught, from Mt Ainslie in Canberra on the night of 15-Jan-2007. Conditions were at best poor, with heavy smoke haze from bushfires up to about 5 degrees above the western horizon and decreasing from there up. I picked up the comet about 5 minutes after sunset whilst looking at an aircraft, and tracked it all the way till around 8:50pm, when it was lost by all in a heavy haze bank near the horizon (all but the man with the IR camera, that is!).

The comet was naked-eye visible for about 5 minutes, but only marginally due to the poor conditions. It was visible in scopes and binoculars from sundown till about 10 minutes before it set.

Shots were taken at prime focus with a Skywatcher 6" refractor (fl=1200) using a Canon EOS 400D DSLR at various ISO and speed settings. More tomorrow night!

Click on images below to enlarge. Some nicer tail and nucleus detail in the enlargements.

Wednesday, November 29, 2006

Enola Gay

Just saw a program on the History Channel about the atomic bombs at the end of the Second World War. A quick check with Google Earth shows that the pits used to load the bombs are easily visible from space, as is the airfield. Well, most of Tinian shows up as an airfield or base area, for that matter.

The two pits are visible here:

15.083391993, 145.634355494

with the airstrips visible running east to west just below.

What's with Gilze Air Force Base?

Years ago I cycled past Gilze Air Force Base in the Netherlands and almost got decapitated by an F-16. It was my own stupid fault really, as there were traffic lights on the road to stop cars whilst aircraft were on final. I rode past these to get a good head-on photo of an F-16 on approach. It was so close I felt the exhaust heat as it passed over, and the ditch next to the road was utilised, much to the amusement of a certain cycling companion!

Last night I was looking at it in Google Earth, only to be astonished that the air force base has been pixelated into oblivion. The road I was on is clearly visible in hi-res, with cars going along it, to the west of the west end of the main east-west runway. But the entire base has been crudely pixelated so as to be unviewable.

You can find it here:

51.5667226748, 4.93937926653

[Just push the "Fly To" tab in the top left of Google Earth, paste in the line above, and push the Search/Magnifying Glass button next to it]

I really don't get this.

More major NATO bases such as Mildenhall and Lakenheath in the UK are completely visible in high res:

52.3636551681, 0.479026865934

52.403339953, 0.556697467456

So what the heck was at Gilze on that day that no one wants us to see? Or are the Dutch just paranoid? A European Area 51? A secret rendition flight on the ground? Anyone? Anyone?

By the way:

Have a look at the B-17 on the ground at Duxford, in flying condition:

52.0922586315, 0.129379759876

And the old WW1 fighter field at Fowlmere, back in use with a small twin on the ground:

52.0774049049, 0.061787171553

Monday, November 27, 2006

Did Australia Have the Bomb?

This is an article I've always wanted to write but have never seemed to get around to.

The following is a speculative timeline of Australia's dealings with the bomb and its potential operational control of a device. I have no real evidence that what follows ever happened as stated, but several factors, notably the lack of obvious payoffs for the atomic tests, the bomber selection shortlist and the bizarre race to build a reactor have made me ponder if Australia had operational control of loaned bombs for some time period. Some of the points below are common knowledge, some are covered in the ABC TV documentary "Fortress Australia" and some are based on what may be commonly called "pub stories". Again, I stress this is purely speculative.


Australia is used as the scene of the UK's nuclear development program. Multiple bombs are detonated on the Montebello Islands off Western Australia, at Maralinga in South Australia and at Emu Field, also in South Australia. One of the almost never explored aspects of these tests is what Australia got for participating in the effort. Cynics might say a vast area of irradiated land and ongoing health problems for thousands of veterans. Others might say our scientific establishment got to piggyback off UK nuclear research to some small degree.

But what if there was another clause to the agreement for the tests? An unwritten or secret clause that was never made public, and may not be for another 50 years? What if payment for the tests was the bomb itself? Whilst this may at first seem mad, let us consider what form it may have taken.

British bombs with RAF serial numbers and RAF markings may have been based in Australia. At the edge of an airstrip, such as at Woomera, they remain in hardened bunkers, guarded around the clock by RAAF personnel who do not know what is inside. The bombs are British, but the understanding is that they are under Australian operational control if and when required for the defense of Australia. If an Australian Prime Minister was ever asked if Australia had the bomb, he could correctly answer in the negative. If a British Prime Minister was asked if the bomb was ever proliferated to Australia, they could also answer in the negative.

And Australia definitely had a delivery system in the form of the Canberra bomber. Whilst not perfect, it was more than adequate to deliver a small free-fall device.


At some time in this period the British bombs are positioned in Australia. By 1961 the UK nuclear tests reach an end.


Australia decides to upgrade its main strike bomber fleet. The Canberras, whilst excellent aircraft, are now outclassed by modern transonic and supersonic fighters, so a replacement is sought. The shortlist of aircraft selected includes the F-111, the Mirage IV and the B-58 Hustler. The most amazing thing about this list is the types of aircraft on it.

The Mirage IV and B-58 are solely supersonic nuclear strike bombers. The Mirage IV was developed to carry France's independent nuclear deterrent before France had ballistic missiles. The B-58 was developed as a replacement for the B-47 and was similar to the Mirage IV in its mission: to deliver nuclear and thermonuclear weapons at supersonic speed through the Soviet air defense system. The last runner was the F-111. This aircraft combined the ability to deliver a nuclear payload with that of a non-nuclear heavy tactical and strategic bomber.

The fact that these three aircraft were on the shortlist points at what the selection criteria must have been: supersonic, multicrew, long range and nuclear capable. But why have a nuclear-capable aircraft if you didn't have the nuclear weapon to deliver?


Australia announces the selection of the F-111 as the Canberra's replacement.


The Menzies government explores buying or building our own indigenous bombs. Buying is turned down outright by the US and UK, whilst neither country will share bomb construction secrets with Australia either, due to Cold War spy leaks in Australia amongst other reasons.


China explodes its first bomb.


In October 1965 a new labour government is elected in the UK. The first since 1951.


Indonesia, after lurching wildly around the political spectrum, enters the year of living dangerously. President Sukarno has previously mentioned building an Indonesian bomb whilst receiving the latest Soviet jet aircraft.


In January 1966 Robert Menzies steps down as Australian Prime Minister after 17 years.


In this time frame the UK withdraws from its loaned-bomb agreement with Australia. Different governments with different outlooks on both sides of the agreement could have contributed to this. Possibly a forward point in time, around 1970, was set for their withdrawal....


The Australian government rapidly looks for local production options for an indigenous bomb.


Jervis Bay is selected as the site for Australia's first nuclear power station. Suggestions are made that it is aimed more at producing bomb-grade plutonium; its addition to Australia's power grid would have been minimal in terms of electricity.


A change of government in Australia, but not party, leads to the cancellation of the reactor development with only the baseplate cleared. Heightened defense ties with the US, based on our involvement in the Vietnam War, along with a lessening of regional strains, have removed the imperative for an Australian nuclear capability.


F-111s finally arrive in Australia.

So, did Australia have the bomb? I don't really know......

Wednesday, May 17, 2006

CISSP Inches Closer

After a few months off blogging I'm back, and a busy few months it has been. Yesterday I got an email from ISC saying they are processing my CISSP certification. The exam was probably the hardest I have done in my life from an IT technical perspective, and the amount I learnt in general doing it was colossal. I also have only 2 subjects to go in my Masters course, so that's almost done as well.

More posts soon.

Thursday, December 22, 2005

Indigo (WCF) is going to be huge….

Indigo, or more correctly WCF (Windows Communication Foundation), is going to be huge. After taking a few hours out to read the latest MS white papers on it, my immediate reaction was: where and when can I get this! I'm currently envisaging the architecture of a large SOA (Service Oriented Architecture), and even from a quick look Indigo appears to be a very good fit for dramatically decreasing the amount of infrastructure coding that would need to be done.

Basically, if you are working on an SOA project, or an SOA/EDA (Event Driven Architecture) one, there is a multitude of plumbing tasks that need to be done to allow services to be easily delivered once you get rolling. This includes things like communication endpoint configuration, implementing the correct WS-Security facilities, ensuring these are followed, having common mechanisms to achieve these things irrespective of whether the service is called via a queued or web service interface, handling distributed transactions, etc.

WCF does all this and more. The two main things that I liked about it are the following:

Firstly, you have independence of transport/protocol/binding from the actual service. So if you want a service to be called via MSMQ for some operations, and via web services for others, that's all supported. In fact WCF appears to provide a commonality across standard web services (think ASMX), MSMQ, Remoting and Serviced Components. The configuration couldn't be easier, and all the WS-* options you ever wanted, such as security and reliable delivery as well as transactional control, are all there.
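To give a feel for the model, a minimal WCF service contract might look like the sketch below. The service name and operations are invented for illustration; the point is that nothing in the contract mentions a transport, so the same contract can be exposed over, say, an MSMQ binding and an HTTP binding purely through configuration:

```csharp
using System.ServiceModel;

// A transport-agnostic contract: the attributes come from WCF's
// System.ServiceModel namespace, everything else is hypothetical.
[ServiceContract]
public interface IOrderService
{
    // Fire-and-forget submission; a natural fit for a queued (MSMQ) endpoint.
    [OperationContract(IsOneWay = true)]
    void SubmitOrder(string orderXml);

    // Request/reply query; a natural fit for a web service (HTTP) endpoint.
    [OperationContract]
    string GetOrderStatus(int orderId);
}
```

Which endpoints and bindings actually front this contract is then declared in the application's configuration file, not in code.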

I can see this halving or more the amount of infrastructure coding that would be required compared to an SOA project that was started today without WCF.

Secondly, MS has taken a big step up from the web services that were provided from VS2002 (.Net 1.0) onwards. Under WCF the tools generate interfaces rather than concrete proxy client and server classes.

Up until and including 2005, Visual Studio and associated tools always generated concrete proxy classes, which meant you wound up with a static and frustrating class that you would need to somehow encapsulate further on the client side if you wished to provide added methods for client applications to use easily. In effect you would need to build another entire layer on the client side to gain ease of use, but at much more cost.

Now with WCF you get a client interface class and you simply implement it in your concrete client class. You can do likewise on the server side. Thus both sides are singing from the same hymn sheet and both sides of the contract can implement as they see fit. You can still let the tools generate the proxy client and server classes if you wish, but the tools sensibly do this as a concrete implementation off the generated interfaces.

I personally can’t wait for WCF and will be experimenting with it at home as soon as I can. If you are doing serious SOA/EDA applications development in any environment, check out WCF as soon as you can. If nothing else you can ensure that your architecture is compatible with WCF so that cutting over to it at a future date will be relatively painless. If you don’t get a heads up on WCF, you probably risk reinventing what is going to be a pretty big wheel when it arrives!

An excellent paper on WCF for developers through to architects can be found here:


Thanks to Nils van Boxsel from MS Canberra for pointing me in the right direction for WCF.

Wednesday, December 07, 2005

Would you call this a Windows Security Hole?

We've all done it. Suddenly remembered we are late for the bus, the tube, the subway, the metro, the dinner date, the anniversary, the pub. We decide that instead of just locking the workstation with WindowsKey-L or with CTRL-ALT-DEL -> Lock Workstation, we had better actually log off or shut down. I've got 12 windows open, but what the heck. There was nothing important or unsaved there.

After we see it's on the way down, we bolt out of the office. ZAP! Believe it or not, this might be the simplest security issue in Windows, and you've fallen straight into it, exposing your PC and company to the simplest of intrusions.

The other night I shut down my XP PC and went to bed, only to find a whining noise and glowing screen greeting me in the morning. What the heck? Well, it seems I had an unsaved Word document open and the whole OS was sitting at a "Do you want to end this task?" dialog. The terrifying thing was I could press Cancel, then press Cancel at the underlying "Do you want to save this file?" dialog, and hey presto, I was still logged in!

This got me thinking. Was this just my XP PC? I have since tried this on Windows 2000 Workstation and Windows 2003 Server, and it seems the same. In fact you can probably try it on your PC right now with nothing more than WordPad. Here's how:

Start up WordPad. Type in a line of text. Go to the Start Menu on the task bar. Select Log Off or Shutdown. Confirm the logoff. Within a few seconds you will get the WordPad save dialog, and after about 15 seconds an End Program? dialog. And there you will stay. With just 2 presses of a Cancel button you are back to a desktop as the last logged-in user!

Even more insidious, the screen saver does not seem to cut in correctly either. I had the basic starfield screen saver set at 1 minute. After I pushed Cancel at the End Now? dialog I was left with an odd screen showing just my background picture. Quickly pressing CTRL-ALT-DEL brought the desktop back, and after cancelling Task Manager I was back in.

This also seems to happen even without any obvious programs running. Sometimes some not-so-well-written freeware programs running in the background, or dubious drivers for some peripheral, take a dislike to being sent the Windows terminate event. Again Windows appears to be left at the End Now? dialog.

Now imagine if that spiteful co-worker, that guy from the next cubicle who is not cleared to your level, that nosy security guard on his rounds, that cleaner who's not really a cleaner, or his kid who knows all about these computer thingys from school, decides to just have a play on your machine. You can guess the rest.

A Windows logout or shutdown should be just that. Irreversible, final. If you have something open and unsaved, it's your problem; you should be warned, but the waiting programs should be terminated within the shortest possible time. Any background programs or drivers that refuse to play the game should receive similar short shrift from the OS. Maybe there is a setting to cause this to happen, but I don't know of it. Please tell me if it's so. Maybe this is all history in Windows Vista, but I haven't played with the beta yet to find out.

Yes, I hear you say, users should be more careful and stick around for the whole 2-minute-plus shutdown to complete in some cases. But the OS should at least meet them halfway and save them when they don't. Advanced passwords, certificates, 1024-bit encryption, SSL, WSE 3.0, Code Access Security and a host of other security solutions are wonderful weapons for cybersecurity in the 21st century.

But when we don't even log off when we think we have, it's all a bit of a moot point, isn't it.

Friday, November 25, 2005

Fixing the Global.asax in ASP.NET 2.0

For some reason ASP.NET 2.0 has gone away from the code behind model to the script model for its Global.asax pages. I ran into this when looking at loading configuration data into global statics on application start in a web service.

When I created a default web service project with Visual Studio 2005, I first found that no global.asax page was created. Why it's not there by default I do not know, as you could easily remove it if not needed, and an empty skeletal one has no performance impact anyway.

To add a global.asax, (or web.config for that matter), simply right click the web site or web server project in your solution explorer, select Add New Item from the context menu and then choose Global Application Class (or Web Configuration File for a web.config).

Happy that I finally had a global.asax in my project, I opened it only to find that there was no class present! With the scripting page model you get a skeletal global.asax written entirely within a server-side script block.

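For reference, the script-model skeleton that Visual Studio 2005 generates looks roughly like the following (reproduced from memory, so details may differ slightly):

```aspx
<%@ Application Language="C#" %>

<script runat="server">

    void Application_Start(object sender, EventArgs e)
    {
        // Code that runs on application startup
    }

    void Application_End(object sender, EventArgs e)
    {
        // Code that runs on application shutdown
    }

    void Application_Error(object sender, EventArgs e)
    {
        // Code that runs when an unhandled error occurs
    }

</script>
```

Note that there is no named class anywhere in sight; the script block is compiled into an implicit class behind the scenes.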

The problem as I see it is twofold.

Firstly, there is no class object that can be easily referenced from elsewhere in your application. If you were to load configuration information in the Application_Start event and store it in a global static, you could not access that static from elsewhere in your application, as there is no class name to reference it by.

Secondly, as there is no class in the global.asax, you can't easily use "using" statements in the code. There may be a way to do this, but it doesn't instantly come to hand. If you place a using before the script tag, the keyword isn't recognised. If you place a using after the script tag, it's not allowed, as you are already inside an implicit class!

After a roam on the internet I came up with an excellent reply post by Scott Allen showing that you can switch back to the code behind model by using the Inherits attribute on the Application directive:

<%@ Application Language="C#" Inherits="Global" %>

The only other step is to copy the code within the script tag into a normal class file called Global.cs, which you add to the project, and then delete the script tag from global.asax.

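A minimal Global.cs along these lines might look like the following sketch (the setting name and static field are illustrative; the class must be named to match the Inherits attribute and derive from HttpApplication):

```csharp
using System;
using System.Configuration;
using System.Web;

// Code-behind class referenced by <%@ Application Inherits="Global" %>.
public class Global : HttpApplication
{
    // Loaded once at startup; reachable application-wide via the
    // class name, e.g. Global.ConfigValue.
    public static string ConfigValue;

    protected void Application_Start(object sender, EventArgs e)
    {
        ConfigValue = ConfigurationManager.AppSettings["MySetting"];
    }
}
```

With this in place, any code elsewhere in the web service can simply read Global.ConfigValue, which is exactly the class-name reference the script model denies you.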

Thus I now have simple and consistent use of the using keyword, along with a class that is accessible from the rest of the web service, via the Global class name, to get at preloaded configuration information.


All this is of course still possible within the script model. You don't have to use using statements, and you can explicitly denote classes. You could also probably store configuration items in the application cache collection instead of your own statics. However, when migrating some older applications that were written using statics, the fact that you can still use the code behind model is quite useful. Personally I prefer the older method, as it provides consistency, and I don't think a dynamic recompile on changing global.asax values is that useful or even appropriate.

But I am interested in knowing the rationale for why the VS2005 ASP.NET team have gone down this change path.

Tuesday, November 15, 2005

Custom Policy Assertions

Have been doing some work looking at WS-Security and WSE 3.0 Custom Policy Assertions. Until now documentation has seemed a little sparse, probably due to the RTM of .Net 2.0 and getting WSE 3.0 out the door, but have just found two very worthwhile links appearing on MSDN:

Creating Custom Policy Assertions


How to secure an application using a Custom Policy Assertion


Monday, October 31, 2005

Earth to NDoc...

NDoc is a fantastic documentation autogeneration tool for .Net; specifically, it generates documentation from the triple-slash (///) XML comments you can place above programmatic elements in C# programs. The output from it is stunning. Not only is it MSDN Library quality, it links in with MSDN. So if you have a "string" parameter to a method, the NDoc output will hyperlink this to the definition of the "string" class over in the MSDN documentation.
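For anyone who hasn't used them, the triple-slash comments NDoc consumes look like this (the method itself is invented for illustration):

```csharp
/// <summary>
/// Calculates the total price of an item with tax applied.
/// </summary>
/// <param name="price">The base price of the item.</param>
/// <param name="taxRate">The tax rate as a fraction, e.g. 0.10 for 10%.</param>
/// <returns>The price including tax.</returns>
public decimal CalculateTotal(decimal price, decimal taxRate)
{
    return price * (1 + taxRate);
}
```

The compiler can emit these comments to an XML file, and NDoc turns that file into the MSDN-style help pages.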

As I'm working as a solutions architect on a new project and we will be rapidly approaching the technology selection point, I'm compiling some options for going with MS or other technologies, and if with MS, which development environment: either .Net 1.1 or .Net 2.0.

Having a look at NDoc the other day, I found that support for .Net 2.0 isn't there yet. It's a great product, but like all open source efforts it is susceptible to the ability of its progenitors to find time to work on it. Anyway, after unsuccessfully trying to find out what was going on with NDoc for .Net 2.0, my old workmate Tom Hollander (now in Redmond, I think) finally managed to dig up the following:


This explains to some degree what's going on with NDoc and .Net v2.0.

I've also posted on a few forums wondering why NDoc hasn't been internalised into the MS VS product suite as NUnit has been.


It would make an equally great addition in my opinion. If anyone has some thoughts on this line, please pile in.

Tuesday, August 30, 2005

Rossco

Tough day for the "Big Easy"

Sounds like New Orleans and Mississippi have been pounded real bad by the hurricane. It puts into better perspective having to scrub a day's flying when the wind is 10kts too fast. Hope the toll is not as bad as it sounds or looks.