Tuesday, February 28, 2023

Coding Kick-off Meeting

Memories of a meeting format that could have been helpful back when we were a team working on Clicker. It actually happened with an EU project a bit later.

  • it was our ordinary three-day midweek meeting
  • almost no professors involved, but everyone who would program anything for the project was present
  • we came with machines ready to build the equivalent of a 'hello world' for the feature we were targeting. For Clicker, if we were to build a sound mixer, 'hello world' would have been being able to output a square wave. If it was a configuration tool on a foreign OS, 'hello world' would indeed be a window with a 'Hi.' button that tells it to proceed
  • we would not aim for a polished outcome (that would be the job of a smaller, dedicated team starting after the meeting), but rather identify as many as possible of the issues that could block that smaller team, and prototype a solution to each. For the sound mixer, applying volume per channel could be left for after the meeting, but deciding how a control panel tells the channel its volume has to be prototyped. For the configuration tool, reporting ongoing progress must be prototyped.
  • people don't need to know the same technologies. If one coding team uses ncurses while another uses wxWidgets and two others go for Electron, that's fine.

Now imagine if I had heard of those 'sprints' back then:

  • Every morning, we spend one hour deciding what we are trying to achieve by the end of the day: which goals are candidates for prototyping, and which should get our attention first
  • Each coding team spends 4-6h trying to prototype what has been decided
  • (my 2 cents: if one team is done faster, they can use the remaining time to study the technologies picked by the others and how those address what they've just built, rather than trying to tackle more objectives)
  • The end of the day is used to review what has been achieved, identify strengths and weaknesses of each approach, and update the list of candidate goals for the next day. "Oh, that makes me think: we'll definitely need a way to pick a file for ${purpose}." We'll see tomorrow morning whether that is important enough to become one of the next prototyping goals.

(Hopefully, by the end of the meeting, everyone has a better understanding of each other's skills with their technologies, and of some of those technologies' weaknesses. Hopefully enough to decide which team and technology will actually implement the features.)


Tuesday, November 24, 2015

gdbm perl tools ... could they save the lost wiki ?

Clicker was the first of my projects to use a wiki, and the last time I thought about an "information browser" program. Yet the phpwiki database broke on a regular basis. I built the gdbmpatch and gdbmshow helpers to figure out how to repair it. The Clicker project sort of died when a last update to the SourceForge policy left the phpwiki broken for good.
#!/usr/bin/perl
# gdbmpatch: drop a broken key from a phpwiki GDBM database.

# a wiki file is made of different keys for each page. The content is
# PHP-serialized (a sort of "bencoded" format) with the following rules:
#   * s:<len>:"<bytes>" encodes a string
#   * a:<#items>:{<;-separated content>} encodes an array
#   * i:<value> encodes an integer

# p contains the html cache of the page (compressed)
#   available keys are $_cached_html and !hits

# li contains "backlinks" as a simple array (keys are integers, values are page names)

# lo contains "page links", same structure as backlinks

# v: contains one of the page's versions.
#   $author and $author_id tell who wrote the page
#   $summary and !mtime tell more about the page.
#   $pagetype should be "wikitext" and "%content" is the whole content.
# note that if version i exists, versions 1..i-1 should exist too.
# obsolete versions have an additional "_supplanted" key.



use GDBM_File;
use PHP::Serialization qw(serialize unserialize);

tie %file, 'GDBM_File', $ARGV[0], &GDBM_WRCREAT, 0640;

print "tied $ARGV[0]. items: ".keys(%file)."\n";

# drop the offending key (here, the cached 'pResources' entry)
delete $file{pResources};

untie %file;
#!/usr/bin/perl
# gdbmshow: dump every key matching a pattern (and optionally delete it).

use GDBM_File;
use PHP::Serialization qw(serialize unserialize);

tie %file, 'GDBM_File', $ARGV[0], &GDBM_WRCREAT, 0640;

print "tied $ARGV[0]. items: ".keys(%file)."\n";

# dump every key matching the regex given as second argument
foreach (keys %file) {
  next if !/$ARGV[1]/;
  print "$_ ==> $file{$_}\n\n- - 8< - -\n";
#  delete $file{$_};    # uncomment to purge the matching keys instead
}

untie %file;
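
For the curious: the format those script headers describe is just PHP's serialize() output, which PHP::Serialization handles on the Perl side. Here is a minimal sketch of a parser for the subset phpwiki uses -- strings, integers, flat arrays -- written in Python for illustration; booleans, objects and the other PHP types are left out.

```python
def parse(data, pos=0):
    """Parse one PHP-serialized value starting at pos; return (value, next_pos)."""
    kind = data[pos]
    if kind == 'i':                       # i:<value>;
        end = data.index(';', pos)
        return int(data[pos + 2:end]), end + 1
    if kind == 's':                       # s:<len>:"<bytes>";
        colon = data.index(':', pos + 2)
        length = int(data[pos + 2:colon])
        start = colon + 2                 # skip ':"'
        return data[start:start + length], start + length + 2  # skip '";'
    if kind == 'a':                       # a:<#items>:{<key><value>...}
        colon = data.index(':', pos + 2)
        count = int(data[pos + 2:colon])
        pos = colon + 2                   # skip ':{'
        result = {}
        for _ in range(count):
            key, pos = parse(data, pos)
            val, pos = parse(data, pos)
            result[key] = val
        return result, pos + 1            # skip '}'
    raise ValueError(f"unsupported type marker {kind!r} at {pos}")
```

With that, a phpwiki backlink array such as `a:1:{i:0;s:8:"HomePage";}` decodes into a plain dictionary, which is enough to eyeball what the gdbmshow dumps contain.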

Thursday, July 12, 2012

Drop the question mark ...

In its original design, and up to version 3.0 distributed with the 3rd edition of his book (2005), Andrew Tanenbaum's MINIX was a single-address-space operating system. Granted, you could have multiple processes running simultaneously, and they are isolated from each other, but they all happily frolic in the same address space, and only the segmentation unit of the x86 processor prevents havoc from happening.

It's not necessarily a bad thing: paging introduces overhead in address resolution - especially when your virtual-to-physical translation buffer is no longer sufficient, requiring up to 3 cache misses before you get a single byte of data. Its impact on context switching is even more frightening: whenever you wake another process up, the whole translation buffer has to be flushed (okay, *some* pages -- usually those holding the kernel -- can remain sticky).

Not having paging has a huge impact, however: you can't build any sort of modern virtual memory. No partial swapping of unused parts of a program... and no "statistical allocation" of memory. Much like in MS-DOS times, if your compiler *could* need up to 16 MB of RAM, you must give it all from the start. If it happens to need only 8 MB for the file you're compiling, and you need the other 8 MB for something else meanwhile, well, too bad for you.

But things have changed since then. Between 2008 and 2010, people added a "virtual memory server" to MINIX, which in turn reminds me of my "pager2" service in Clicker. It lets you control the mapping of your address space (do_mmap, do_mmunmap) and supports the process manager (do_fork, do_exit), but here it also directly handles page faults through message passing. Again, it's very "memory object"-like (another key Clicker concept), which shouldn't be a surprise: I got the idea of the memory object while reading the other Tanenbaum book about operating systems :)
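
To make that "page faults through message passing" idea concrete, here is a toy model. Every message field, queue and function name below is invented for illustration; the real MINIX exchanges C structs through the kernel's IPC primitives.

```python
# Toy model: the kernel turns a page fault into a message; a user-level
# "VM server" picks a free frame, installs the mapping, and replies so the
# faulting process can be resumed.
from collections import deque

fault_queue = deque()                   # kernel -> VM-server mailbox
reply_queue = deque()                   # VM-server -> kernel mailbox
free_frames = deque(range(100, 104))    # pretend pool of physical frames
page_tables = {}                        # pid -> {vaddr: frame}

def kernel_raise_fault(pid, vaddr):
    """Kernel side: suspend the faulting process and message the server."""
    fault_queue.append({"type": "VM_PAGEFAULT", "pid": pid, "vaddr": vaddr})

def vm_server_step():
    """Server side: grab one fault message, resolve it, answer the kernel."""
    msg = fault_queue.popleft()
    frame = free_frames.popleft()
    page_tables.setdefault(msg["pid"], {})[msg["vaddr"]] = frame
    reply_queue.append({"pid": msg["pid"], "resume": True})

kernel_raise_fault(pid=7, vaddr=0x1000)  # process 7 touches an unmapped page
vm_server_step()                         # the server resolves it
```

The cost the post grumbles about is visible even in the toy: every single fault pays a full message round-trip before the process runs again.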

Altogether, I'm absolutely not convinced that placing address space management into a server rather than into the microkernel was the best design choice one could make. Granted, 'forking a process' is something that could happen outside the microkernel, but page faults? ...

IIRC, I had the idea of letting the Clicker microkernel know that some physical pages had been 'pre-allocated' to some memory object (which you can think of as a 'virtual region'), and only notifying the server-level code about a miss when that pool is exhausted... a sort of hybrid micro/exokernel. But I dropped the whole project before I got to that point.
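
That hybrid idea could be sketched like this -- everything here is invented for illustration, since I never implemented it: the kernel serves most faults from a per-object pool of pre-granted frames, and only pays the IPC round-trip to the server when the pool runs dry.

```python
class MemoryObject:
    """A 'virtual region' with a kernel-side pool of pre-allocated frames."""

    def __init__(self, preallocated):
        self.pool = list(preallocated)   # frames granted up front by the server
        self.mapped = {}                 # vaddr -> physical frame

    def kernel_fault(self, vaddr, ask_server_for_frames):
        """Fast path stays in the kernel; the server is only a fallback."""
        if not self.pool:
            # slow path: one message round-trip refills the whole pool
            self.pool.extend(ask_server_for_frames())
        frame = self.pool.pop()          # fast path: no IPC at all
        self.mapped[vaddr] = frame
        return frame
```

With a pool of two frames, the first two faults never leave the "kernel"; only the third triggers the (here simulated) call up to the server.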

One of my major "errors" in the overall Clicker design, though, was the will to make paging optional, even though it turned out that every other feature depended on it. My module mechanism allowed me to do so, but it over-complicated the whole code, introducing the need for sophisticated bookkeeping structures, run-time service replacement, and a mysterious "private heap" feature where linker scripts should have been enough.

Thursday, October 20, 2011

the lost Wiclicker

Back in June 2004, I stopped trying to keep documentation as static .html files and turned towards a (php)wiki for Clicker documentation and research. It turned out to be very useful for coordinating our efforts, as DasCandy provided sporadic help here and there and whyme_t developed installer tools and modules for Clicker. I was so happy to see a team at last starting!!

The wiki is now unfortunately defunct, after both a spam assault and my failure to identify a sudden bug in the PHP code. I have a last dump on my system, but I haven't managed to recover the content yet.

After the lost forums, trying to recover things yet again was somehow too much.

Wednesday, May 12, 2010

Hyper-Desktop Markup Language ?

Among the "funny ideas" I had for Clicker, many have echoed in others' minds, and some eventually got implemented here or there.

I won't call myself the "spoiled inventor of tags", of course. I'm not. Though I have to admit tags are 100% aligned with what I'd have pushed for Clicker, had I any resources to make things move by just pushing them.

Now, let's have a look at box #7 of this "clicker desktop mockup" I made up somewhere near 2K++ ... The "one-key command line" for quickly spawning things is now mainstream ... even Windows has it (or at least some plugin does), where you type [ESC]word[ENTER] to search for and launch your report editor rather than crawling through cluttered menus.

"Incoming" (last imported documents) and "favourite" meta-folders are of lesser significance now that we have Firefox's "download history" window. But let's check out that box #7 ... It claimed that "users do tasks, more than they use applications". That is starting to change as well, the welcome panels of Thunderbird 3, Wireshark (Lucid release) and K3b (a while ago) being obvious examples. It also claimed that "a task is performed using a collection of tools operating on a collection of documents". We haven't got anywhere near that so far, afaik. Do I want to integrate something with the wonderful Tomboy application? I have to learn C# ... Do I want Gimp to be able to learn new key combos for filters? I bet I'll have to dig into some GTK+, and maybe some Scheme or Python would help me.

Yet, all I'd be doing, somehow, is moving boxes around, connecting wires between blocks of code, etc. This is something that should be as easy as writing HTML, but there doesn't seem to be any Hyper-Desktop Markup Language around.
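
Purely as speculation, such a markup could declare tasks as collections of documents wired to tools, the way HTML declares links between pages. Every element and attribute below is made up on the spot:

    <task name="write-report">
      <documents>
        <collection match="~/reports/*.odt"/>
        <collection match="~/figures/*.png"/>
      </documents>
      <tools>
        <tool cmd="gimp" accepts="image/png"/>
        <tool cmd="tomboy" accepts="text/plain"/>
      </tools>
      <wire from="collection" to="tool" on="double-click"/>
    </task>

Nothing of the sort exists, of course; the point is only that the wiring itself looks no harder to express than a handful of anchor tags.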

-- edit -- Btw, I was about to attach a picture to a thundermail when Thunderbird crashed on me because I was also moving directories around. *sigh*. Long is the road.

Thursday, May 6, 2010

shell companion windows

To some extent, it isn't necessary to write a whole new application for the futureshell. What we need is a connection to a "graphical tty" associated with the current shell, so that e.g. if I want to offer a preview of an image or show graphical relationships between items, timelines, etc., all I have to do is send "drawing commands" to that companion window. Where the companion stands, whether it sticks and moves along, what its size is, etc. can be managed by the window manager. That may not be as sweet as having "file descriptor #4" ready by default, but it will certainly be easier to evolve towards.
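
A minimal sketch of the shell side of that idea, assuming an invented plain-text command vocabulary (the companion window that parses and renders the stream is not shown):

```python
# The shell never draws anything itself: it only writes one-line drawing
# commands to whatever stream leads to the companion window.
import io

def preview_image(companion, path, x=0, y=0):
    """Ask the companion to display an image at (x, y)."""
    companion.write(f"image {x} {y} {path}\n")

def show_timeline(companion, events):
    """Lay event labels out left to right as a crude timeline."""
    for i, label in enumerate(events):
        companion.write(f"text {20 * i} 40 {label}\n")

# Stand-in for the pipe (the wished-for 'file descriptor #4') to the companion:
companion = io.StringIO()
preview_image(companion, "shot.png")
show_timeline(companion, ["boot", "login", "build"])
```

Because the protocol is just lines of text, the companion can be swapped, resized or repositioned by the window manager without the shell ever noticing.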

Tuesday, May 4, 2010

Formerly known as "FutureShell"

I once made a little mockup of "FutureShell", a sort of mix between the Enlightenment terminal and Nautilus. It had many "killer features", such as "intelligent icons" that inspect the type of the data you drop on them in order to move it to the "most appropriate place".

While working today, there's another feature I wish my shell/terminal had: transfer of configuration. I'd love to be able to somehow drag and drop the value of the SSH-AGENT variable, or the label of a directory so that another window cd's into it.

I wonder whether SDL-for-perl could help me prototype this ... since I've realised that I don't need to build my own kernel / X server to experiment with document access.