This is a dump for miscellaneous ideas. Please contact me if you are interested in the current status of these ideas.
Some small ideas in user interfaces that I find nice:
An interactive real-time map of the University of Regensburg campus. Based on open standards: SVG, microformats, RSS, etc. Integration with OpenStreetMap (?) Various overlays that can be combined with each other:
Information available not only on the map but also as text, RSS, GPS waypoints, etc. The history is stored and can be scrolled through.
Possibly split into several master's projects.
I want to have a directory for each research project,
Managing my own tasks is hard, especially as I have several roles (researcher, teacher, sysadmin, parent). While productivity methods like GTD are certainly helpful, I find it very difficult to determine whether I should prioritize grading overdue theses over fixing a broken RAID in a server, or prioritize work on a research paper over taking some time to play with my children. Within one role, it is relatively easy to prioritize tasks. However, when one has to fulfill several roles, it is sometimes hard to decide which role is more important at the moment. Additionally, there is always more work to be done than is possible within a given timeframe. There will always be tasks that I won't be able to tackle.
There are several obvious approaches to this problem (I have not researched related work so far):
Fixed time slots - I could dedicate Mondays to research, Tuesdays to teaching, etc. There are two major problems with this approach. First, it does not work for urgent, unplanned tasks. When a student sends me an e-mail on Wednesday asking a question about the current work sheet, it would not be nice to tell them that they need to wait until next Tuesday for a reply. Sometimes a dedicated day has to be used for other tasks, e.g. for travelling. The second problem is that most roles do not have constant time requirements. There is little teaching to be done during holidays. A lot of research and writing happens right before a conference deadline - and little directly after it. Therefore, a fixed schedule is not practical.
Fixed time amounts - …
Highest Priority Only -
The problem of scheduling tasks with different priorities - and taking care that even less important tasks do not get neglected - is not unique to humans. Operating systems need to do the same. Therefore, a lot of research has gone into kernel schedulers. The Completely Fair Scheduler (CFS) in Linux distributes tasks across CPUs and makes sure that no task starves while still prioritizing tasks with a lower nice value, i.e. a higher priority (also taking into account CPU affinity and reducing context switches?).
Maybe the algorithm can be adapted for a personal task manager?
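As a rough sketch of how that adaptation could look (all names hypothetical, not taken from CFS itself): each role accumulates "virtual time" as hours are spent on it, scaled by the inverse of the role's weight. Important roles age more slowly and therefore come up for selection more often, while low-weight roles still get picked eventually - no role starves.

```python
# CFS-inspired picker for personal roles (hypothetical sketch).
# Hours spent on a role are divided by its weight, so higher-weight
# roles accumulate virtual time more slowly and are picked more often.

class RoleScheduler:
    def __init__(self, weights):
        # weights: role name -> importance (larger = more important)
        self.weights = weights
        self.vtime = {role: 0.0 for role in weights}

    def next_role(self):
        # Always work on the role that is furthest behind in virtual time.
        return min(self.vtime, key=self.vtime.get)

    def account(self, role, hours):
        # As in CFS: runtime is weighted by the inverse of the priority.
        self.vtime[role] += hours / self.weights[role]

sched = RoleScheduler({"research": 3, "teaching": 2, "sysadmin": 1})
log = []
for _ in range(6):
    role = sched.next_role()
    log.append(role)
    sched.account(role, hours=1)
```

Over six one-hour slots this picks research three times, teaching twice, and sysadmin once - matching the 3:2:1 weights while never starving the sysadmin role. Urgent tasks (the Wednesday student e-mail) would still need a separate preemption mechanism.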
> elements) and a way to put modified elements back into the slot they were originally in.
(todo)
Analyze Wikipedia edits and create edges between all users who have edited the same Wikipedia article. Then find out which users are within X hops of a certain user, and what the distance between two users is.
Status: rough idea.
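A minimal sketch of the idea: build the co-editor graph from (user, article) pairs and answer distance queries with breadth-first search. The edit records here are made up for illustration; a real version would parse a Wikipedia dump.

```python
# Build a co-editor graph and compute hop distances between users.
from collections import defaultdict, deque

def build_graph(edits):
    # edits: iterable of (user, article) pairs
    editors = defaultdict(set)          # article -> set of editors
    for user, article in edits:
        editors[article].add(user)
    graph = defaultdict(set)            # user -> co-editors
    for users in editors.values():
        for u in users:
            graph[u] |= users - {u}
    return graph

def distance(graph, a, b):
    # BFS; returns the number of hops from user a to user b, or None.
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        user, d = queue.popleft()
        if user == b:
            return d
        for nxt in graph[user] - seen:
            seen.add(nxt)
            queue.append((nxt, d + 1))
    return None

edits = [("alice", "Graphs"), ("bob", "Graphs"),
         ("bob", "Trees"), ("carol", "Trees")]
g = build_graph(edits)
```

With this toy data, alice and carol never edited a common article but share bob as a co-editor, so their distance is 2. The "within X hops" query is the same BFS with a depth cutoff.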
Problem: Many UNIX tools work on whole files. SFFS would allow accessing only parts of a file using directory syntax, i.e. every file can also be a directory. Note: Linux does not allow a file to be both a regular file and a directory at the same time. According to include/linux/stat.h, the st_mode file type flags are mutually exclusive. Maybe solve this at the path resolution level, i.e. 'file.html' will appear as a file, while 'file.html/' will appear as a directory.
Example:
> cat test.ini
[Section 1]
val=123
[Section 2]
val=333
val2="tree"
> ls test.ini/
Section 1  Section 2
> cat test.ini/Section\ 1
val=123
> ls test.ini/Section\ 1/
val
> cat test.ini/Section\ 1/val
123
> cat test.ini/Section\ 5
cat: test.ini/Section\ 5: No such file or directory
See also: http://sysadvent.blogspot.de/2010/12/day-15-down-ls-rabbit-hole.html
For each file type a specialized filter would be needed.
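Such a filter could look like the following sketch for INI files (user-space only, no actual filesystem integration; the function name is made up). It resolves the path components after the file name against the file's section/key structure, mirroring the transcript above:

```python
# Resolve SFFS-style sub-paths inside an INI file (hypothetical sketch).
import configparser

INI = """\
[Section 1]
val=123
[Section 2]
val=333
val2="tree"
"""

def sffs_lookup(ini_text, *path):
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    if not path:                      # "ls test.ini/" -> list sections
        return cp.sections()
    section, *rest = path
    if not cp.has_section(section):   # "cat test.ini/Section 5"
        raise FileNotFoundError(section)
    if not rest:                      # "ls test.ini/Section 1/" -> list keys
        return list(cp[section])
    return cp[section][rest[0]]       # "cat test.ini/Section 1/val"
```

So `sffs_lookup(INI)` lists the sections, `sffs_lookup(INI, "Section 1")` lists its keys, and `sffs_lookup(INI, "Section 1", "val")` returns "123". A FUSE wrapper around such per-format filters would avoid the kernel-side st_mode restriction noted above.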
based on SFFS concept
Example:
> mount http://raphael.cc /mnt/raphael.cc  # establish long-lasting HTTP connection
> cd /mnt/raphael.cc
> ls ./  # get all links on the page (URL or title of link? title as symlink to URL?)
blog/
dates.xhtml
http://heise.de/
> cat .  # act as a file - provide access to the raw HTML (and maybe the XML tree as in SFFS, see above)
<html>
<body>
<a href="blog/">Blog</a>
<a href="dates.xhtml">test</a>
<a href="http://heise.de/">Heise</a>
</body>
</html>
> cd http://heise.de/  # auto-mount heise.de
> pwd
/mnt/heise.de/
[...]
(obviously, caching would be great)
Use also as basis for e.g. wikifs, facebookfs, etc.?
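The directory-listing step alone (no FUSE, no networking) can be sketched with the stdlib HTML parser: extract the link targets from a page, as the `ls` in the transcript above would. The page content here is made up to match the example:

```python
# Collect <a href="..."> targets from an HTML page - the entries that
# an httpfs "ls" would show (hypothetical sketch, stdlib only).
from html.parser import HTMLParser

class LinkLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []   # href targets; could become dirents/symlinks

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

html = """<html><body>
<a href="blog/">Blog</a>
<a href="dates.xhtml">test</a>
<a href="http://heise.de/">Heise</a>
</body></html>"""

lister = LinkLister()
lister.feed(html)
```

Answering the open question from the transcript, the link *titles* could be captured in handle_data and exposed as symlinks pointing at the href targets.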