Philosophy of the Free Software Foundation (FSF): Free software is a matter of freedom: people should be free to use software in all the ways that are socially useful. Software differs from material objects--such as chairs, sandwiches, and gasoline--in that it can be copied and changed much more easily. These possibilities make software as useful as it is; we believe software users should be able to make use of them.
I believe the same is true for ideas as well - people should be free to use ideas in all the ways that are socially useful. Ideas don't fall into the material domain either. Users of ideas should be able to develop and implement them in their own suitable ways.
In my personal experience, people at different points of time or in different locations, when faced with similar kinds of situations, are quite likely to arrive at similar solutions and ideas, unless the situation has infinitely many ways of being dealt with. Leave aside the work, they may not even be aware of each other's existence. Just because one is more business minded and goes about obtaining that variation of monopoly called a patent, should that take away the right of the other to use the ideas/solutions that he arrived at independently?
Ideas presented on this page have resulted from struggles, routines, observed issues and demands of day to day life - the date mentioned against an idea only indicates when it was first keypunched on this homepage. They are dedicated to the academic fraternity, to the free spirit of the human being, and to the students, colleagues and friends at Pune University Computer Science Department, who have nurtured the researcher/academician in me. These are my little contribution towards a Free/Open Ideas Foundation that I hope will be formed someday, when a couple of revolutionaries and Richard Stallmans come together for it.
Who am I? Am I a dreamer unfit for his times, or just another idiot on the block? Is it a gift or a curse to be able to see beyond others, to be first? Whatever it may be, I am sharing these ideas, in case somebody out there feels alone! Some of these are likely to look like weird and fancy wishes, or possibilities out of some science fiction, or ideas from neverland. Didn't someone dream of walking on Mars once! In any case, technological advancements over the years have slowly been paving the way for the realisation of many of these ideas. Any individual, group or organisation wishing to explore one or more of these together is welcome to get in touch with me.
An engineering background often enables one to look at things from a utility point of view and prompts one to put things together to make something useful out of them. Below you will find some of these ideas regarding applications of embedded systems, design of new hardware, utility applications etc.
In 1994 I came across the notion of the clipboard in Windows, and SoundBlaster cards. I felt an audio clipboard could be great. While working, if I remembered some idea or an issue about other work, I could just speak into the microphone and be done. Later I could search (and play) it in the list of recorded messages, based on some (preferably spoken) keyword that I used to tag that recording.
I was terribly slow at typing compared to the present, and a couple of times writing/typing down one idea took enough time to make me forget the other ideas and issues being dealt with at that moment.
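A minimal sketch of the search-and-play half of such an audio clipboard, assuming the recordings already exist as WAV files and that the keyword tags are plain text for now (spoken-keyword matching would sit on top of this); the index file name and the aplay player are just placeholders:

    # audio_clipboard.py - toy index for tagged voice notes (sketch, not a product)
    import json, subprocess, time
    from pathlib import Path

    INDEX = Path("clips.json")          # hypothetical index file

    def load():
        return json.loads(INDEX.read_text()) if INDEX.exists() else []

    def add_clip(wav_path, keywords):
        """Register an already-recorded WAV file under some keyword tags."""
        entries = load()
        entries.append({"file": wav_path, "keywords": keywords, "time": time.time()})
        INDEX.write_text(json.dumps(entries, indent=2))

    def search(keyword):
        return [e for e in load() if keyword.lower() in (k.lower() for k in e["keywords"])]

    def play(entry):
        # 'aplay' is assumed here; any command-line audio player would do
        subprocess.run(["aplay", entry["file"]])

    if __name__ == "__main__":
        add_clip("note-2024-01-01.wav", ["tar", "indexing idea"])
        for e in search("tar"):
            print(e["file"])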
Lovely were the days when Doordarshan was the only television channel available: not much viewership, not many advertisements, not many channels to surf, no remote contention and no finger itches. With time, not only did the duration of commercial breaks extend to 5-10 minutes, but their frequency increased too; sometimes in a half hour program we see 15 minutes of commercials. It's not an issue, except when two interesting programs are going on simultaneously on two different channels and one of them can be treated as being of lower interest for a short duration, say a half hour animation program and a cricket match. Manually checking the other channel again and again for its commercial break to get over disturbs the rhythm, leaving aside the pain to the fingers in the process. Even picture-in-picture television models are not satisfactory. As television viewers we would like to enjoy full screen viewing of the current program.
I wondered if we could automate channel swapping during commercial breaks between a main channel and an auxiliary channel (for example, during the cartoon show the cartoon channel becomes main and the cricket one becomes auxiliary) and maximise viewer pleasure. Some other viewer might want this basic idea extended to one main and N auxiliaries, switching to any one of the auxiliary channels that is not running commercials at that moment.
This feature could be made available either as an integrated part of the TV hardware or via an add-on/slip-in hardware unit, though a good starting point would be to provide it via TV-tuner card hardware and/or software first. The main issue here is to identify the beginning and end of a commercial break - should it be based only on an audio pattern, a video pattern or both, possibly combined with some user intervention to allow the system to learn to identify the boundaries?
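Just to make the switching side concrete, the toy loop below assumes some detector (audio/video based, possibly trained with user feedback) already answers "is this channel in a commercial break right now?", and only shows how the main/auxiliary swap could be driven. The detector, the Tuner class and its show() method are stand-ins, not real APIs:

    # channel_swap.py - toy main/auxiliary switching loop; the detector is supplied by the caller
    import time

    class Tuner:
        """Stand-in for real tuner / set-top-box control."""
        def show(self, channel):
            print("now showing:", channel)

    def watch(main, auxiliaries, tuner, in_break, poll_seconds=5):
        """in_break(channel) -> True while that channel is running commercials."""
        current = main
        tuner.show(current)
        while True:
            if in_break(main):
                # prefer the first auxiliary that is not itself in a commercial break
                for aux in auxiliaries:
                    if not in_break(aux):
                        if current != aux:
                            current = aux
                            tuner.show(current)
                        break
            elif current != main:
                current = main          # main programme has resumed
                tuner.show(current)
            time.sleep(poll_seconds)

    # e.g. watch("cartoon", ["cricket"], Tuner(), in_break=some_detector)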
I had often discussed this idea with my brother in the past. In October 2003, after returning from the Deepawali break, I happened to share the idea with a close friend in the organisation I was working for then, who suggested that I talk about it with the CEO. The CEO advised me to draft a mail with the idea and send it to the CTO and a couple of other people. I did learn some interesting things about life in this process, though the idea didn't appeal to them.
Towards the end of 2005, when I look back, I find the commercials issue even worse. The pleasure is slowly being taken out of viewing. But commercials are not spam; sometimes TV ads are more worth watching than the TV program itself. There used to be a time of pop-up and floating ads that interfered with internet surfing. To attract users, browsers evolved pop-up blocking capabilities. But did that hamper the (advertisement) industry? Things have slowly changed to less intrusive text based advertisements on internet pages these days, which don't seem to take the pleasure out of browsing. I hope things will improve in the case of TV commercials as well.
Around the beginning of 2001, I had an opportunity to work on a POP client for a low power handheld device, which didn't reach the agreement stage because of the IT slump. However, whatever I had thought about the related problem of browsing mails on low end mobile devices might still be useful.
It is not profitable to download full mail content in the beginning, given the limited bandwidth, memory and screen size of such devices and the cost of data transfer.
Wondering how we will reply to mails from the mobile device? Just a simple distribution of information between client and server, and maintaining some information to put the pieces together. ;)
To meet these basic requirements we will need to add some negotiation and command/control information exchanges to existing POP/IMAP clients and servers, or a separate protocol could be developed for mail exchange in the case of mobile devices.
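For the download side, plain POP3 already gets part of the way there: the optional TOP command fetches only the headers and the first few lines of a message, leaving the full body for later. A minimal sketch using Python's standard poplib; the host, user and password are placeholders:

    # headers_first.py - fetch only headers + a short preview over POP3
    import poplib
    from email.parser import BytesParser

    def list_previews(host, user, password, preview_lines=5):
        box = poplib.POP3_SSL(host)          # placeholder host; plain POP3 also possible
        box.user(user)
        box.pass_(password)
        count, _ = box.stat()
        for num in range(1, count + 1):
            # TOP returns the headers plus the first `preview_lines` lines of the body
            _, lines, _ = box.top(num, preview_lines)
            msg = BytesParser().parsebytes(b"\r\n".join(lines))
            print(num, msg.get("From"), "|", msg.get("Subject"))
        box.quit()

    # the full body (RETR) would be fetched later, only for mails the user opens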
Different documents and non-document files have a certain structure associated with them. Even the contents of a text file could be structured. There have been occasions when I wanted to diff two (doc, pdf, html etc.) document files for content but couldn't find satisfactory tools for the purpose. There have been other occasions when I wanted to diff two text files but existing diff programs were of no use, as the difference lay in certain column(s), either by position or according to the line structure that the entire file followed.
It would be useful to have diffing tools that are content sensitive, or rather can be made content sensitive via some specification of the content and its structure, thus making it possible to have generic tools that can learn and work with any kind of input. It would be a value addition to various editors and viewers if they could also support content sensitive diffing for the information they can handle.
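As a small illustration of the column-wise case, the sketch below diffs only the fields the caller says matter, given a delimiter and a list of column indices; a real tool would accept a much richer content/structure specification:

    # coldiff.py - diff two delimited text files on selected columns only
    import difflib

    def project(path, delimiter, columns):
        """Reduce each line to just the columns of interest (missing columns become empty)."""
        out = []
        with open(path) as f:
            for line in f:
                fields = line.rstrip("\n").split(delimiter)
                out.append(delimiter.join(fields[c] if c < len(fields) else ""
                                          for c in columns) + "\n")
        return out

    def column_diff(file_a, file_b, delimiter=",", columns=(0,)):
        return "".join(difflib.unified_diff(project(file_a, delimiter, columns),
                                            project(file_b, delimiter, columns),
                                            fromfile=file_a, tofile=file_b))

    # e.g. print(column_diff("old.csv", "new.csv", columns=(0, 2)))
    # changes confined to any other column are ignored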
The next step after content sensitive diffing will be semantic diffing. Does it look like a far fetched dream?
When I joined PUCSD as a fellow in 1995, I was interested in research in operating systems, primarily adaptive and fault tolerant ones. As time passed, I got more and more into teaching and administration, and that interest receded into the past. Around 1996-97, the following ideas had caught my fancy.
Later, when I came to know of Cradle's UMS architecture at the beginning of 2001, materialisation of that fancy idea looked possible in the nearer future. As I learnt more about the UMS architecture over the following months, I envisioned more powerful desktops based around this kind of architecture. Also, IMO the power requirements didn't make it as suitable for embedded devices.
Further to that, while working at Codito on a VoIP related project, I saw an application of that fancy idea in improving the performance of VoIP applications, provided network bandwidth was not the bottleneck, if standard sound and network cards could be built along the lines of the idea and programmed to offload the relevant protocol processing from the main processor.
Traffic control and streamlining is an issue, with the number of vehicles increasing day by day. I had come across a small news clip in a local paper mentioning some company trying to bring automated traffic control to certain Indian cities. It was reported to be in use abroad. Not much information was available about the company and its work in that article. However, the article motivated me to look at this problem that had been in front of me daily, and to make observations. Based on my experiences with city traffic during the daily up-down to the workplace, I had suggested possibilities of work in this area to some contemporary colleagues in January 2002. However, that didn't interest them.
In July 2004, I happened to visit Siemens' site via an email on the eCos mailing list, and found that they have done quite some work in automated traffic control. It resulted in both happy and sad feelings. However, on browsing the information available on their site, it seemed that some of my ideas had still not been tried by them. I still see scope for indigenous work in applying embedded systems to solve traffic problems in India. Govt. agencies might want to buy solutions from foreign countries/companies for various reasons of their own, but that's a different issue altogether.
Improved navigation is partly related to traffic control, but mainly focusses on providing cheaper indigenous solutions for helping a person find a destination in a city without the need to ask passersby or shopkeepers. Often we come across people (or find ourselves) asking for directions to a certain place in a new area. You don't always meet people who can tell you the way. In the day time you can ask people, but what about at night, when even the shops are closed? Interested in knowing the visions of the future that I first had in January 2002? No idea about India, but in countries like Japan and the USA it could well be possible in the coming years.
I believe that AI and robotics will have evolved sufficiently, even before 2050, that we could pass by a robot and mistake it for a human or other biological lifeform. Terminator kind of robots with fast skin regeneration could be a reality, as it seems from the research work of Tejal Desai on generation of cells by trapping the necessary chemicals in nanospace.
By then, there would also be viruses infecting robots. Airborne - caught by wireless receptors; spreading by touch - transferred by static electricity patches on the surface, as evolved robots would have sensors spread over their body surface; and so on. Directions like Human Area Networking and Personal Area Networks don't leave these things too far fetched. However, the idea of "a piece of code moving around as a form of energy" might still be quite difficult to digest for many of you.
Teaching the Distributed Operating Systems elective at PUCSD in 1999 introduced me to the idea of a distributed filesystem. That propelled me to think about a canopy filesystem on a system that is not part of any cluster, and it didn't matter whether it was single user or multiuser.
Different filesystems are just various data organisation schemes, the data being variable length files, and we primarily look for optimum space utilisation and creation, retrieval and updation efficiencies. Different schemes are possible that are extremely good for specific kinds of file related requirements. Requirements could vary from fast creation and deletion of temporary files during compilation, to weekly backups, to a code file being updated, just to name a few.
A canopy filesystem would consist of various file organisation schemes laid out on different parts of the storage device, and it will be the only filesystem the user sees during work. Each file will have some additional attributes indicating the requirements it has from the underlying system, the kind of operations expected on it, etc. As much as needed, tools will be made aware of the new file attributes. Based on the appropriate file attribute(s), the canopy filesystem will associate the file with the corresponding component file organisation for optimal overall system performance and user satisfaction, and all this will happen transparently to the user.
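A minimal user-space sketch of the dispatch idea, assuming just a single 'usage' attribute per file and a couple of hypothetical component organisations; a real canopy filesystem would of course live in the kernel or behind something like FUSE:

    # canopy_sketch.py - route file creation to a backend based on a usage hint (toy)
    class Backend:
        """Stand-in for one component file organisation (e.g. log-structured, extent based)."""
        def __init__(self, name):
            self.name, self.files = name, {}
        def create(self, path, data=b""):
            self.files[path] = data
            return path

    class CanopyFS:
        def __init__(self):
            # hypothetical mapping: usage attribute -> component organisation
            self.backends = {"temporary": Backend("fast-create-delete"),
                             "archive":   Backend("compressed-append"),
                             "default":   Backend("general-purpose")}
            self.placement = {}                     # path -> backend, kept transparent to the user
        def create(self, path, usage="default", data=b""):
            backend = self.backends.get(usage, self.backends["default"])
            self.placement[path] = backend
            return backend.create(path, data)
        def read(self, path):
            return self.placement[path].files[path]

    fs = CanopyFS()
    fs.create("/tmp/cc1234.o", usage="temporary")   # compiler scratch file
    fs.create("/backup/week42.tar", usage="archive")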
Could it be possible that a piece of data is not stored on any storage medium, and the internet is used as the storage medium for it? It has multiple copies for redundancy and is always in transit. Hackers know how to insert it and retrieve it. It is lost only when the entire internet shuts down.
A very old dream, from student life. I have often heard that Sanskrit is quite a suitable language for computers, but it has been at least 10 years since then - where is a computer that I can program in a Sanskrit based computer language, or an operating system booting with messages in Sanskrit? A lot of work has already been happening in this direction, thanks to the initiatives taken up by CDAC and NCST, but a lot still needs to be done - for example, look at the web pages in local Indian languages. The availability of free tools and OSes like the GNU tools and Linux can be useful in the beginning to develop proof-of-concept solutions, but the need still remains to come up with fresh and intuitive ways for the purpose.
I have come across people who are quite good but for the barrier of the English language. I am quite sure that when we develop computer programming languages and other development tools based on local Indian languages, and create a development environment for the user in the local language, we will be able to tap a vast pool of talent waiting to be discovered. Indian languages don't lack the capability to express the logic required for writing computer programs; slave thinking does.
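Even today's tools can carry a little of this: Python 3, for instance, accepts Devanagari identifiers, so the toy functions below are valid code as they stand. Only the surface names are in an Indian language here; the keywords are still English, which is exactly the gap the idea points at:

    # devanagari_identifiers.py - valid Python 3, identifiers in Devanagari
    def योग(क, ख):              # "yoga" - sum of two numbers
        return क + ख

    def क्रमगुणित(न):            # factorial, written recursively
        return 1 if न <= 1 else न * क्रमगुणित(न - 1)

    print(योग(2, 3))            # 5
    print(क्रमगुणित(5))          # 120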
Though I had known about the WayBack Machine for the past couple of years, I didn't look at it closely till I taught Database Management Systems in 2005.
Considering the volume of data they handle, it would be useful to have a data compression solution (maybe bzip2 or the like, modified) that could not only produce a higher degree of compression but also generate separate meta information about the compressed files and directories, which could be used for faster data retrieval from the archive.
Considering that the compressed file consists of blocks of information that can be decompressed almost independently, without going through decompression from the beginning of the file, this meta information would consist of entries like the name of a file/directory and its extent, expressed in one of a couple of ways such as "(start block, start offset in uncompressed block contents) and (end block, end offset in uncompressed block contents)" or "(start block, start offset in uncompressed block contents) and size". Better still if decompression could work at the individual file level without compromising on the compression ratio.
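A toy sketch of the idea using independently compressed zlib blocks (rather than a modified bzip2): members are packed into fixed-size uncompressed blocks, each block is compressed on its own, and a side index records (start block, offset, size) per member, so retrieval only decompresses the blocks that member spans. The member names and block size are made up for illustration:

    # blocked_archive.py - independent compressed blocks + a retrieval index (toy)
    import json, zlib

    BLOCK = 64 * 1024                                # uncompressed block size

    def pack(members):
        """members: dict name -> bytes. Returns (list of compressed blocks, index)."""
        buf, blocks, index = b"", [], {}
        for name, data in members.items():
            # the member starts inside the block that will be flushed next
            index[name] = {"block": len(blocks), "offset": len(buf), "size": len(data)}
            buf += data
            while len(buf) >= BLOCK:
                blocks.append(zlib.compress(buf[:BLOCK]))
                buf = buf[BLOCK:]
        if buf:
            blocks.append(zlib.compress(buf))
        return blocks, index

    def extract(blocks, index, name):
        """Decompress only the blocks that the named member spans."""
        e = index[name]
        first = e["block"]
        last = (e["block"] * BLOCK + e["offset"] + e["size"] - 1) // BLOCK
        data = b"".join(zlib.decompress(blocks[i]) for i in range(first, last + 1))
        return data[e["offset"]:e["offset"] + e["size"]]

    blocks, idx = pack({"a.txt": b"x" * 100, "b.txt": b"hello archive"})
    assert extract(blocks, idx, "b.txt") == b"hello archive"
    print(json.dumps(idx, indent=2))                 # the separate meta information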
There were some data organisation related ideas as well, which I will put up someday after verifying them against whatever I had downloaded from the Internet Archive site.
As a normal user, I feel that tar can be improved to provide faster listing and extraction of selected files and/or directories in an archive, if the following features can be incorporated in it. Maybe there are reasons, related to compression options, why these are not available.
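For an uncompressed tar, a user-space workaround for the faster-selective-extraction part is already possible with Python's standard tarfile module: build a side index of member offsets once, then seek straight to a wanted member instead of scanning the whole archive. The index file name below is just an assumption; compressed archives are where the harder problems start:

    # tar_index.py - build a name -> offset index for an uncompressed tar archive
    import json, tarfile

    def build_index(tar_path, index_path="members.json"):
        with tarfile.open(tar_path, "r:") as tf:          # "r:" = no compression
            index = {m.name: m.offset for m in tf.getmembers()}
        with open(index_path, "w") as f:
            json.dump(index, f)
        return index

    def extract_one(tar_path, index, name, dest="."):
        """Seek directly to the member's header instead of walking the archive."""
        with open(tar_path, "rb") as raw:
            raw.seek(index[name])
            with tarfile.open(fileobj=raw, mode="r:") as tf:
                tf.extract(tf.next(), path=dest)

    # idx = build_index("backup.tar"); extract_one("backup.tar", idx, "etc/hosts")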
If we could extend existing filesystems such that the contents of a file are transparently decoupled into a header and a body when stored, it would not only reduce the load on the server side in serving user requests (with the aim of reducing internet traffic), but also aid in faster encryption and decryption of the filesystem. In many cases you can't make any sense of a bit pattern if the corresponding header information is not correct or not available. In such cases we can achieve faster encryption/decryption of the filesystem by encrypting only the relevant header information.
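A small user-space sketch of the header-only encryption argument, assuming a fixed header size per file and using the third-party cryptography package's Fernet recipe purely as a stand-in cipher; the split point, sizes and security model are all assumptions for illustration, not a design:

    # header_crypt.py - encrypt only a file's header portion (illustrative, not a filesystem)
    from cryptography.fernet import Fernet

    HEADER_SIZE = 512                      # assumed split point between header and body

    def protect(path, fernet):
        with open(path, "rb") as f:
            blob = f.read()
        header, body = blob[:HEADER_SIZE], blob[HEADER_SIZE:]
        # only the header is run through the cipher; the body is left as-is
        return fernet.encrypt(header), body

    def recover(enc_header, body, fernet):
        return fernet.decrypt(enc_header) + body

    key = Fernet.generate_key()
    f = Fernet(key)
    enc_header, body = protect(__file__, f)        # use this script itself as sample data
    assert recover(enc_header, body, f).startswith(b"# header_crypt.py")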
If you have worked with a poor internet connection and/or a limited browsing and download quota per month, and/or you have been paying for internet usage out of your own pocket, you might share the feelings behind some of these ideas. In my opinion, faster machines and better connectivity shouldn't be used as excuses for not looking into increasing the actual content ratio in internet traffic.
This idea goes back to the days in the late 1990s when bus topology was used to network the computers in the PUCSD laboratory, and we often got a chance to practice binary search using bus terminators. ;)
When I was teaching the Distributed Operating Systems and Networks courses there in 1999, I revisited the topic of Remote Procedure Call closely, and it generated the idea of Remote Command Call (RCC), aimed at reducing the network load in a setup based on remotely mounted directory trees and a large userbase, by reducing the amount of transferred data and/or the number of data exchanges between client and server. As a side effect, the network related processing at both ends reduces, but part of the command related client processing also gets shifted to the server end.
Consider some example operations below, on a directory tree having parts mounted from remote computers, where RCC will be beneficial.
At a cursory glance, RCC can easily be handled by updating the shells. Some day I will also put up here the details/issues that I had worked out related to RCC long back.
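A bare-bones sketch of the shell-level idea: instead of letting, say, grep over a remotely mounted tree pull every file across the network, the client ships the command to a small daemon on the file server and gets back only the output. The port number, command whitelist, framing and the "fileserver" host name are all made up for illustration:

    # rcc_sketch.py - ship a command to the machine that owns the data (toy protocol)
    import socket, subprocess, sys

    PORT = 5999                                      # arbitrary choice for the sketch
    ALLOWED = {"grep", "wc", "ls", "du"}             # keep the server from running just anything

    def serve():
        with socket.create_server(("", PORT)) as srv:
            while True:
                conn, _ = srv.accept()
                with conn:
                    argv = conn.recv(65536).decode().split("\0")
                    if argv and argv[0] in ALLOWED:
                        out = subprocess.run(argv, capture_output=True).stdout
                    else:
                        out = b"rcc: command not allowed\n"
                    conn.sendall(out)                # only the result crosses the network

    def call(host, argv):
        with socket.create_connection((host, PORT)) as conn:
            conn.sendall("\0".join(argv).encode())
            conn.shutdown(socket.SHUT_WR)
            chunks = []
            while (c := conn.recv(65536)):
                chunks.append(c)
        return b"".join(chunks)

    if __name__ == "__main__":
        if sys.argv[1:] == ["server"]:
            serve()
        else:
            # e.g. python rcc_sketch.py grep -r TODO /export/projects
            print(call("fileserver", sys.argv[1:]).decode(), end="")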
This idea was offered as one of the Networks course projects at PUCSD in 1999. Experiences with the bus topology and the usage pattern in the computer laboratory there had been the motivating factors.
Consider the case when we telnet/rlogin/ssh from computer A -> B -> C -> D and work on D while sitting at computer A, and the computers involved in this chain are on the same local network. It impacts the response time of the applications being interacted with on D, and also results in increased network traffic, not to forget the extra processing at the intermediate computers in the chain.
These problems can be taken care of if we could identify this chain and short-circuit A->D in a transparent manner for the duration of the effective session of A with D. This would also cut down the impact of heavily loaded intermediate computers on the session. At a cursory glance, a solution would require housekeeping at the telnetd/rlogind/sshd level or at even lower layers, or separate daemons, or a separate code path in the networking stack for the purpose, and it would have to address issues such as a node reboot, a connection break, proper/erroneous termination of a session, suspension of an outgoing session on an intermediate node, etc. For example, in the quoted case, logout on D should establish the session A->C, i.e. move the session endpoint to the previous node, unless the previous node is the start node itself.
The idea is applicable to connection chains of any length and can easily be adapted to situations where part of the chain is outside the local network - short-circuiting can then be applied to the part of the chain (of length > 1) that lies within the local network.
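Leaving the transport-level splicing aside, the bookkeeping itself is simple; the toy structure below only tracks a login chain, answers which hop the start node should really talk to, and handles the logout-on-the-far-node case from the example above. Node names are arbitrary:

    # chain_shortcircuit.py - bookkeeping for short-circuiting a login chain (toy)
    class LoginChain:
        def __init__(self, start):
            self.hops = [start]                 # e.g. ["A"]

        def login(self, node):
            self.hops.append(node)              # A -> B -> C -> D builds ["A", "B", "C", "D"]

        def effective_peer(self):
            """The node the start host should exchange traffic with directly."""
            return self.hops[-1] if len(self.hops) > 1 else None

        def logout(self):
            """Logout on the far end: the session endpoint moves to the previous node."""
            if len(self.hops) > 1:
                self.hops.pop()
            return self.effective_peer()

    chain = LoginChain("A")
    for node in ("B", "C", "D"):
        chain.login(node)
    assert chain.effective_peer() == "D"        # short-circuit A -> D
    assert chain.logout() == "C"                # after logout on D, the session becomes A -> C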
Ever since I learnt about Intel hardware with more than one processor in the mid 1990s, I have wondered whether it could be possible for each processor to run a different OS at the same time, and about the possibility of efficient inter-OS communication. Running an OS inside a virtual machine application is not what is meant here.
It should be possible if each device/card is made intelligent enough to multiplex/de-multiplex requests/responses from/to the OSes and to present independent and protected views of the resources, with each OS given some share of memory to work with, and so on.
If you watch Hindi movies, you will often find songs inserted just for the sake of it. Some of these movies become watchable if they could be played in flow, without any song coming in between.
What updates would it require in the VCD, DVD etc. storage formats and in movie player software, so that a user just selects his choice of viewing - with or without songs - at the beginning and enjoys the movie without any break/intervention? It would further add to the pleasure if the user could preview the songs with context and choose which of them he would want in his viewing of the movie.
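The player side is mostly a matter of carrying segment metadata; the sketch below assumes the disc or file already carries a list of (start, end, kind) segments and simply computes what to play for the user's choice. The tagging of segments in the storage format is the part that would actually need standardising; the timings here are invented:

    # song_skip.py - compute a playback list from tagged segments (sketch)
    # Each segment: (start_seconds, end_seconds, kind), kind in {"scene", "song"}
    SEGMENTS = [(0, 1800, "scene"), (1800, 2100, "song"),
                (2100, 4200, "scene"), (4200, 4500, "song"), (4500, 7200, "scene")]

    def play_list(segments, wanted_songs=()):
        """Return the (start, end) ranges to play, keeping only the chosen songs."""
        keep = []
        for i, (start, end, kind) in enumerate(segments):
            if kind != "song" or i in wanted_songs:
                keep.append((start, end))
        return keep

    print(play_list(SEGMENTS))                     # the movie without any songs
    print(play_list(SEGMENTS, wanted_songs={1}))   # user previewed and kept the first song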
Computer games and viruses had been my major attractions to the computer before I studied computer science. When I first got to know Nethack in 1994, I wondered whether this game could be enhanced with good animation, and whether the user could be given the flexibility to specify his own levels, characters and dungeon layout via some specification file(s) to enhance the game. Roughly speaking, could it be possible to generate good quality animation objects and animated scenes on the fly, based on some specification file(s) or a specification language?
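A tiny taste of the specification-file half of this, with a format invented purely for illustration; the on-the-fly animation would sit on top of a scene parsed like this:

    # level_spec.py - build a Nethack-style level from a tiny text specification (toy)
    SPEC = """
    size 20 6
    wall 0 0 19 0
    wall 0 5 19 5
    door 10 0
    monster D 5 3
    """

    def build(spec):
        grid = None
        for line in spec.strip().splitlines():
            kind, *args = line.split()
            if kind == "size":
                w, h = map(int, args)
                grid = [["." for _ in range(w)] for _ in range(h)]
            elif kind == "wall":
                x1, y1, x2, y2 = map(int, args)
                for x in range(x1, x2 + 1):
                    for y in range(y1, y2 + 1):
                        grid[y][x] = "#"
            elif kind == "door":
                x, y = map(int, args)
                grid[y][x] = "+"
            elif kind == "monster":
                sym, x, y = args[0], int(args[1]), int(args[2])
                grid[y][x] = sym
        return grid

    for row in build(SPEC):
        print("".join(row))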
Ever had a situation where you were watching a programme on one channel and another channel was showing an equally interesting programme simultaneously? These days channels often compete this way. You would want to enjoy both of the programmes. Wouldn't it be quite useful if you could direct the set top box or TV tuner hardware/software to save the other programme(s) for you in the background while you were watching one of the channels, so that you could enjoy the saved ones later?
Ever noticed the wide quality range of the voice box, and how much quality change human ears can withstand before understanding things incorrectly in non-electronic voice communication? You can make out the speaker's voice, or what he is saying, even when he has a sore throat or a blocked nose. And there are times when we get fooled by a voice imitation, though with a bit of careful listening and practice we can catch even quite close imitations. What lies underneath all this? Everyone has a voice signature that our ears and mind interpret to identify the person. For that matter, every word that we speak also has a signature, and that's what helps us make out what a person with a blocked nose and/or sore throat is trying to say.
What is this voice/word signature? Just some weighted combination of specific frequencies with specific amplitudes, or something else? The frequencies and their magnitudes needed to produce the sound of a vowel or a consonant are race and region independent. Where the signatures start differing, we have problems making out what is being spoken; of course the granularity varies from person to person. True, doing a fine analysis closely comparable with the human ear and brain would require finer measurement and analysis instruments. But we can still start in the direction of finding, or rather first understanding, the signatures of common sounds, vowels and consonants, and pave the way for the human-imitating robots of the future.
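As a first, very crude cut at such a signature, the sketch below reads a mono 16-bit WAV recording of a single sound and reports its few strongest frequency components with their relative magnitudes; numpy is assumed, and the file name is a placeholder:

    # sound_signature.py - crude spectral "signature" of a recorded sound (sketch)
    import wave
    import numpy as np

    def signature(wav_path, top=5):
        with wave.open(wav_path, "rb") as w:
            rate = w.getframerate()
            samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
        strongest = np.argsort(spectrum)[-top:][::-1]
        # (frequency in Hz, magnitude relative to the strongest component)
        return [(float(freqs[i]), float(spectrum[i] / spectrum[strongest[0]]))
                for i in strongest]

    # e.g. print(signature("vowel_a.wav"))  -> a handful of (Hz, weight) pairs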
This is an obsolete idea in the sense that it deals with obsolete hardware. Around 1995 the PUCSD laboratory had a lot of 8088/8086 and 80286 machines with 1MB of memory and Hercules mono and EGA displays; most of them were used as diskless clients to connect to the Novell server and then, via telnet, to the Unix server. I often wondered if we could have some kind of X windowing layer on these machines that could -
When we look at an area from a distance, we see biggish landmarks and objects. Other details are not so distinguishable, sometimes not even noticeable till we get a bit closer. How can all this be translated to database organisation, processing, operations and applications? I have been struggling since September 2005 to clear up this hazy picture.
Will it translate into a granularity of query, where what/how much data is visible to query processing depends on how coarse or fine the query level is? Will it require some weights or levels associated with the data? Should these weights be static, or dynamically determined by the database? How will it impact the design and implementation of databases? One thing to note here is that we don't want to get false information, no matter what the granularity of the query is. Also, query processing should be faster at a coarser level.
Possible applications of zoomable databases would be situations where one is interested in getting into the vicinity of the needed information, to various degrees. The query would remain the same, but by changing its granularity one could get more, or more precise, information.
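One naive way to make the question concrete: attach a prominence level to each row and let the same query run at different zoom levels, so a coarse query sees only the landmarks and a fine one sees everything. This says nothing about doing it efficiently inside the engine; it only illustrates the intended behaviour, with made-up data:

    # zoom_query.py - same query, different granularity (illustration only)
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE place (name TEXT, kind TEXT, prominence INTEGER)")
    db.executemany("INSERT INTO place VALUES (?, ?, ?)", [
        ("University", "landmark", 3),
        ("Main Road",  "road",     2),
        ("Tea Stall",  "shop",     1),
        ("Letter Box", "detail",   0),
    ])

    def places(zoom):
        """zoom 3 = far away (landmarks only) ... zoom 0 = street level (everything)."""
        rows = db.execute(
            "SELECT name FROM place WHERE prominence >= ? ORDER BY prominence DESC", (zoom,))
        return [r[0] for r in rows]

    print(places(3))   # ['University']
    print(places(1))   # ['University', 'Main Road', 'Tea Stall']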
There is life beyond computers. There are streams other than computing, like mechanical, civil, chemical, environmental science, agriculture and archaeology. For a change, one can dream of fruits and vegetables, pollution and sound too.
Sometime in November 2005, looking at a computer speaker, I wondered - what if this speaker consisted of many tiny speakers arranged in a grid, each speaker the size of a tiny LED? Would it result in an effectively more powerful speaker at a lower cost? What would the output audio quality be? Could it be possible to have different LED-sized speakers playing different sets of frequencies, suitably arranged with respect to the listener's ears, to get an altogether different kind of sharp and clear audio quality, of bass and treble control, of stereo/surround sound effects? Would such a compound speaker system add to the pleasant feeling of music and be easier on the ears, dissipating energy more uniformly compared to existing big speakers? Would it require some different kind of material/hardware/technology to make such systems possible? Many more questions, many more possibilities!
Could it be possible that the vibrations from all the noise and vehicular movement, and the heat and gas pollution, could be used to feed devices alongside and below the roads to generate and store electricity, which could then be used to light direction markers, traffic signals etc.?
In 1991, while looking for a final year project, I stumbled upon this idea that could have had some commercial value. Just imagine: you take a dried fruit slice out of the package, soak it in water for a while, and you have a slice as if it were cut a few minutes ago. The work involved finding the optimum sizes/shapes of fruit slices and suitable dehydration techniques that resulted in minimum vitamin loss in the process. Issues of packaging, the need for flavour additions and a longer shelf life for the packaged product were also part of the work. Whether it was feasible for every kind of fruit, at what stage of ripening the processing should happen, and whether the same processing would work for every fruit were a few of the questions also to be addressed.
All that for a bachelor's degree project? The plan then was to continue with higher studies in agricultural engineering as an M.Tech and maybe later as a PhD student, and take the idea to completion. Even though preliminary research had been conducted for a couple of months and the various requirements for carrying out the needed experiments had been identified, for some unfortunate reasons the project was changed to "optimisation of water evaporation and sugar addition for papaya preservation".
I haven't tried many mobile handsets, only two Nokia models. I switched from the earlier, simpler model as it didn't have dictionary support, so typing messages was not only quite time consuming but also more painful for the fingers; otherwise it was quite good enough. I would not be surprised if some of the feature enhancements suggested below have already been taken care of in newer Nokia or other handsets by now.
The following have been my experiences with the Nokia 3350 handset. Fixing these issues should not be a big task.
In my view, handset features should be customised region-wise. For example, for the India region it doesn't make much sense to provide support for the languages of middle eastern and far eastern countries, or the Chinese calendar. The recovered memory could be used to provide more message/phone-number storage and/or other features useful for the region.
In early 1999, I was looking for a click-n-shoot camera with easy reel loading/winding features and a bit more control over clicking - a camera that could be operated easily by anyone in the family and also had some scope for experimentation. That's how I got in touch with the Kodak KE-50.
Normally we take the batteries out of a camera, to avoid any battery leakage, when we don't intend to use it for a couple of months. This Kodak model does not deal with this situation well - each time you put the batteries in, it assumes that a reel has been inserted afresh and advances it by a fixed number of frames. Maybe the designers assumed that the user would finish the reel soon after loading it. Luckily, acquiring this insight did not cost many reels. IMO fixing this issue in the camera firmware should not be a big task.
This model needs significant improvement on the flash front as well. The flash bulb is on one side, so it is bound to have some effect on the lighting of the scene being captured, but the user does not expect something as bad as around 20% of the frame having poor flash lighting. It does not seem to give the user value for his money. Perhaps a slightly angled flash bulb unit, and/or a glass cover of varying thickness/curvature, could help in getting uniform lighting for the captured scene - at least for whatever fits in the rectangle marked by the corners.