Monday, May 12, 2008

Spring/Webwork/Tomcat/Maven?

Personally, I found the Spring/Webwork/Maven combination to be a pretty sweet set of frameworks to develop with. There was definitely a huge learning curve (see the misc. project presentations for others' experiences), but once you get past that, they provide great flexibility.

Maven provides a lot of heavy-lifting capability... It lets you take one file, pom.xml, and via its magic all external code required to build and deploy your small module is automatically pulled from the Gnizr servers, your code is compiled, and any files you generate are overlaid on top of the delivered code and automagically built into a war file (the standard format for deploying Java web applications). And I know from personal experience that compiling Java/Tomcat projects and building a war or jar archive by hand can be pretty daunting.
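
To give a flavor of what that one file looks like, here is a minimal sketch of a pom.xml; the group/artifact IDs, version numbers, and repository URL are illustrative placeholders rather than the actual Gnizr settings:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>edu.umbc.example</groupId>            <!-- placeholder coordinates -->
      <artifactId>my-module</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>war</packaging>                     <!-- build a deployable war file -->
      <!-- external code pulled down automatically at build time -->
      <dependencies>
        <dependency>
          <groupId>opensymphony</groupId>
          <artifactId>webwork</artifactId>
          <version>2.2.6</version>
        </dependency>
      </dependencies>
      <!-- remote repository the dependencies come from (placeholder URL) -->
      <repositories>
        <repository>
          <id>project-repo</id>
          <url>http://repository.example.org/maven2</url>
        </repository>
      </repositories>
    </project>

Running "mvn package" against a file like this resolves the dependencies, compiles everything, and assembles the war.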

Webwork gives us a nice Model-View-Controller implementation to work with, supporting different languages for the view layer, including JSP and FreeMarker templates, and tying into Spring for managing singleton/DAO objects and friends.
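
As a rough sketch of how that looks in practice (the action class name and template paths below are made up for illustration), a WebWork action mapping in xwork.xml ties a controller class to its views:

    <package name="example" extends="webwork-default">
      <!-- controller: the action's execute() runs on each request and returns a result name -->
      <action name="viewBookmarks" class="org.example.action.ViewBookmarksAction">
        <!-- views: render a FreeMarker template or a JSP depending on the result -->
        <result name="success" type="freemarker">/pages/bookmarks.ftl</result>
        <result name="error" type="dispatcher">/pages/error.jsp</result>
      </action>
    </package>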

Spring makes it nice and easy to pass common objects into other objects and to control the initialization of different things magically via XML, and combined with Maven it is easy to extend the existing Spring XML with new XML.
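
For instance, a fragment of Spring bean XML along these lines (the class names are placeholders, and the dataSource bean is assumed to be defined elsewhere) wires a shared DAO into an action without any lookup code in the action itself:

    <beans>
      <!-- a shared singleton DAO, created and initialized once by the container -->
      <bean id="bookmarkDao" class="org.example.dao.BookmarkDaoImpl">
        <property name="dataSource" ref="dataSource"/>
      </bean>
      <!-- the common object is passed (injected) into whatever declares a matching property -->
      <bean id="viewBookmarksAction" class="org.example.action.ViewBookmarksAction">
        <property name="bookmarkDao" ref="bookmarkDao"/>
      </bean>
    </beans>

A module's own XML file can then be layered on top of the delivered one, which is the "extend the existing Spring XML with new XML" part.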

For a simple project, Spring+Webwork would probably be overkill, but for a system designed to evolve over time and be used and edited by many users, I think it (or something similar) would be essential for organizing things in a sane manner. Maven on the other hand would most likely be useful in projects of any size.

Subversion: Necessity or Annoyance?

Personally, I find a good version control system to be a core necessity for every major programming/scripting project (major being, let's say, either more than 5 source files or at least 2 people working on the project). When used properly, it makes it trivial to track every change to the codebase and why that change was made, and it lets specific changes be reverted if an earlier version of the code is needed.

Also, when more than one person is working on the project from separate accounts and/or machines, it lets each user make changes, commit them, and have those changes show up in everyone else's copy of the code (without ever having to worry about manually merging changes together). And in complex projects, even if you are the only one working on them, the ability to revert changes is key (having your code automatically backed up on Google's servers is definitely an advantage too :-).
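
In day-to-day use that boils down to a handful of commands; the repository URL, file name, and revision numbers below are just for illustration:

    svn checkout http://example.org/svn/project/trunk project    # grab a working copy
    svn update                                  # pull in teammates' committed changes
    svn commit -m "fix bookmark paging bug"     # publish your changes with a log message
    svn log SomeFile.java                       # see who changed a file, when, and why
    svn merge -r 120:119 SomeFile.java          # undo the change made in revision 120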

So... Agree? Disagree? Unsure? DISCUSS!!!

Sunday, May 11, 2008

Web 2.0 at the Department of Defense

http://www.pcworld.com/article/id,129328-c,internetnetworking/article.html

The Department of Defense's Defense Intelligence Agency has been experimenting with Web 2.0 technologies, starting with a wiki, since 2004. They are attempting to leverage wikis, mashups, blogs and RSS feeds to assist their analysts.

This is a departure from the standard use of technology in government, where hardware and software upgrades can take years and the latest and greatest web technologies are shunned both for security reasons and out of misunderstanding of their uses.

While we won't see these technologies at all levels of government, for valid reasons, certain parts of government could come to use them more and more.

Add some Wesabe to your Mint

http://www.boingboing.net/2008/04/24/wesabes-new-reccomen.html

Wesabe uses anonymized purchase data to help you manage your finances better and has recently launched a system that suggests other places to buy the same items. It seems like it would mix nicely with Mint.com in an overall financial strategy.

Of course, when the system says I should buy at Walmart instead of Target, I'll have to refuse that advice. The next step for these financial systems should be to monitor the social and green aspects of the places you shop.

Semantic Hacker

http://www.semantichacker.com/

Semantic Hacker has an open API for semantic discovery and is running a million-dollar challenge to use their system in new and interesting ways. The system seems to pull out semantic data from any text source. There is an example on their site where you can paste text and get out the data. It seems mostly like a keyword analysis system. It would be interesting to have someone comment on the system as it stands.

"Develop a software prototype, business plan or both with commercial viability that is focused on a vertical market. Solutions in finance, healthcare or pharmaceuticals might be good places to start."

SocialDevCampEast - Report from the field

Or day after the field.

https://barcamp.pbwiki.com/SocialDevCampEast

If you have ever been to an IEEE or similar conference, you know that everything is planned ahead of time and there are committees for everything. That works well, but for a smaller event like the SocialDevCampEast that happened yesterday, the power of Web 2.0 technologies came together to keep the day flowing smoothly.

Take a look at the wiki link above. There is a proposed schedule. At a regular conference, that would BE the schedule. At the camp, there weren't even defined sessions before the campers voted on them in the morning. You can see the proposed time schedule got pushed back as people probably showed up a little later than expected. The actual schedule exists on the page alongside the proposed one, showing the differences. Updated in almost real time, it is a huge break from the monolithic existence of regular conferences. It was encouraging and impressive that such an open-source style of event flowed so smoothly.

The actual content of the day, and its overall purpose as I saw it, was interesting. At a regular conference, the sessions might go for two hours each, starting at 8 am and ending at 5 pm. This camp had one-hour sessions, each on a single topic, ending about 4 pm. Since the topics were decided on in the morning, the presenters varied in their preparation: some were ready to go with organized presentations, while others threw something together based on the requests of the campers.

Two sessions in the morning, each with discussion, led to a discussion-filled lunch, followed by another two sessions in the afternoon. Each session slot had four different topics in individual rooms, for a total of sixteen topics throughout the day. Following the sessions there was a camp-sponsored open bar down the street at The Brewer's Art. With sixteen topics covered, people from D.C., Baltimore, Philly, NYC and elsewhere, with all kinds of technical backgrounds, kept the conversation going for hours after the official sessions were over.

In CMSC 491S/691S we had interesting lectures that led to discussions that had to be halted as the end of class came. Imagine sixteen lectures in one day and hours of discussions afterwards. Contacts, friends and business opportunities were made and the results of the camp will echo for quite some time.

Another camp will be held in fall 2008. Whether it will be in Baltimore, New York, or someplace else hasn't been announced, but it will probably be bigger and better than this one.

Tuesday, May 06, 2008

BarCamp East Saturday May 10

I heard (via an old-school radio broadcast- Mario Armstrong's Digital Café short program) about an interesting SocialDevCampEast event this coming Saturday, May 10 (8:30 am - 10:00 pm) at the University of Baltimore. This is described as "an Unconference for Thought Leaders of the Future Social Web". It would appear that over 160 participants have signed up in advance, including UMBC folks (UMBC is a sponsor). This is one in a series of BarCamp workshop conferences distinguished as low-overhead, self-organized, user-generated, open-participation events. The event site lists a number of diverse proposed sessions, many of which should be of interest to our class members. Frankly, I'm shocked I've not heard about this event via class or our blog! (Wish I could attend, but it's not to be).

Monday, May 05, 2008

I Need Information…

Whenever you need information to help you make a decision, wherever you are, that is an opportunity for computing technologies and the Internet to help provide the answers. This basic premise underlies much of the computing revolution over the past few decades, although in recent years much emphasis has shifted toward the wherever-you-are aspect, as students and business professionals embrace mobile Internet-enabled devices. What sort of information are we talking about? Of course there’s plenty of well-structured, “data”-like information to be had- everything from stock quotes, product prices and information, sports scores, and weather, to descriptions of people, places and things in the world, to name a few examples. As the relative cost of storage and computing has plummeted, rich multimedia content is now commonplace- images, audio, and now videos are ubiquitous on the Internet. But our most recent Internet developments have encouraged broad-based publication of more information of the “knowledge” sort. As a society we are increasingly valuing news, blogs, messages between friends, accounts of people and events, and social commentary that the masses themselves are generating. We refer to this as the long-tail effect in media production [2]. I was struck by the statistic Akshay Java cited earlier this week- that publicly generated text content outweighs professionally edited text content by a factor of four or five to one [1].

Today’s and tomorrow’s Internet capabilities are truly dependent on wireless communication technologies and inexpensive electronics- those of us in the software business should not forget that. Computer electronics products allow us to consume all kinds of information in a multitude of forms to suit our lifestyles- some of us sit at our computers reading e-mail, web pages, blogs and videos; others listen to or watch streaming multimedia on their home entertainment systems; still others listen to podcasts on portable audio players; and the mobile computing crowd does all of this and more on their computer-enabled phones (or are they cell-phone-enabled computers?). In the next five years we will see all of these modalities grow in popularity- as home computers blend in with entertainment systems to support diverse sources of streaming audio and video (TiVo, Sling, Roku, Apple TV), and we take the most miniaturized forms of these electronic capabilities with us everywhere (even on highly active excursions like hiking, climbing, running, boating & swimming!) [3]. Mobile devices will be made tremendously more effective and convenient with Internet sites, services and applications geared toward convenience- going beyond local content (traffic, shopping, weather, news, and events) and multimedia playback to include diverse communications- phone, messaging and even social networking [4,5].

When looking five years ahead, however, the most notable theme I anticipate is a simple twist on the long-standing premise that I began with: Whenever and wherever you would like to produce information, that is an opportunity for computing technologies, and particularly the Internet, to help provide the means. We’ll be doing more than taking some geo-referenced pictures and uploading them- we’re talking about providing the context and commentary; micro-blogging on a massive and distributed scale; crowd-sourced coverage of live events; and even organizing flash mobs to create the events [6]. We will not just be consuming the information out there- we will be interacting with the world, and each other, and literally making the news.

I need information… and I need to produce information!

References:

[1] Akshay Java, class lecture on 2008-04-30 http://socialmedia.typepad.com/blog/files/socialmedia.pdf

[2] “The Long Tail” described on Wikipedia http://en.wikipedia.org/wiki/The_Long_Tail

[3] Travis Hudson “All Nike Shoes to Become Nike+ Compatible”, article in Gizmodo 2007-03-26 http://gizmodo.com/gadgets/portable-media/all-nike-shoes-to-become-nike%252B-compatible-247097.php

[4] Ellen Uzelac, “Mobile Travelers: Wireless devices, such as GPS units and cell phones, are transforming the way we vacation”, article in Baltimore Sun 2008-05-04
http://www.baltimoresun.com/travel/bal-tr.techtravel04may04,0,7453953.story

[5] "Mobile Social Network" described on Wikipedia http://en.wikipedia.org/wiki/Mobile_social_network

[6] Madison Park, “At harbor, 80s-tinged flash”, article in Baltimore Sun 2008-05-04 http://www.baltimoresun.com/news/local/bal-md.rickroll04may04,0,4649727.story

UMBC 2013

I’m envious of you kids starting out now. Back in 2009 when I started at UMBC I got my new quad-core MacBook running Leopard. Of course Lynx and Cougar have come out since then, and they run fine on my machine. It's a little slow running iLife ’12, but that is to be expected with the haptics they are running through the new iPods.


Of course it doesn’t really matter how fast iLife is; I still get along fine online. That is where everything is these days. I opted for a 500GB drive when I got my machine, but it is barely full four years later. Everything is online. I use Google Documents for the few text docs I need to exchange with some backwater friends and keep everything else in my personal cloud out in Rwanda. Lack of natural resources, a growing population, and rising prices in India and China drove outsourcing to new locations. Technically, my data isn’t in Rwanda; it is all over the world. Natural disasters aren’t a thing of the past, but the cloud is maintained by les Rwandais, and it works as well as any in Bangladesh or Peru that my friends use. Enough about my extracurricular activities and me.


After Blackboard lost their patent suit while I was in high school, a swarm of open source systems sprang up to devour the previously off-limits IP. UMBC switched over to one of them a year or so ago and it has been great. Built on the Mozilla Facebook platform, it has enabled the kind of collaboration that professors trying to prep us for the real world could only have dreamed of when I was in junior high. Hell, Facebook barely existed back then. They had millions of users, sure, but the interface was so simple. I looked you guys up before the tour and good job, you’ve learned how to use the fine-grained access control; you’ll appreciate that in four years. Social graphing and reputation markets were just topics of research, but you use them every day.


You may have heard of professors turning off the Internet back in 2008; that doesn’t happen at UMBC, where it is used throughout the disciplines. Now that the semantic web is more reality than pipe dream, the web is more useful than ever. That required system they are quoting down at OIT and the bookstore is just a minimum; you might want to get something a little more powerful depending on your major. Right, you’ll want something more powerful whatever you are doing. Granted, much of your system will exist online, but much of the rendering is done locally, so get an Intelvidia or AMD-ATI card in whatever you buy. School is what you learn, but it is also who you network with, now more than ever in this connected world.


The world may be a little hotter, and gas a lot more expensive, but it sure seems like web technologies have helped make it a better place. Whatever you do here, you will be connected to everyone else on campus and around the world. No longer hindered by travel requirements, you may get a guest lecture from a Swedish designer on an all night design treatise or an explanation of biochemistry from Vietnam. They don’t call it an Honors University for nothing.


Thank you for visiting UMBC today. I know being outside in the big blue box on an August day in 2013 isn’t exactly your idea of fun, but it is good once in a while. For those using the iRobot telepresence units, please send them back to the Commons before you log out or you will be charged extra. School is a place you learn, not a place you are, but you might want to get on campus once in a while. You have such opportunity ahead of you, and I think you have made a good decision choosing to go to UMBC. Class of 2018, my caps lock off for you! Sorry, that was a very, very lame joke that I know only two of you understood.

The Web in 2013

In 2013, the web will be fairly similar to the one we all know and love today. There will be several key differences however.

The first and most obvious will be the adoption of XHTML 1.0 Transitional as a standard. Some web sites will be using pure XHTML, but the mainstream will be stuck on 1.0 Transitional due to the high percentage of users still running Windows XP and Internet Explorer 8.0 (the last supported version of IE on XP) with its horrific XHTML Strict rendering.

Another key difference will be the proliferation of high-resolution video advertisements. With 50Mbit fibre-to-the-home being standard, and 100Mbit available in some startup markets, high-definition video ads will have all but replaced the static and low-res animated graphics of 2008. Upon visiting sites, users will be bombarded with motion, forcing them to click on the ads or risk being sent into epileptic seizure.

Instead of the traditional Flash, video ads will be streamed out in standards-based MPEG-5, which will become the ISO standard for compressed 2160p high-definition content. Thanks to the standards-based codecs used, playback in browsers will be accomplished via built-in code; no special plug-in will be required to view the embedded videos.

By 2013, frames and tables as layout crutches will have been all but eliminated from modern web sites. Instead, well-placed div tags will denote content while CSS stylesheets will tie everything together, effectively separating content from layout once and for all. All still using XHTML 1.0 Transitional, however...

Instead of writing code by hand or using current craptastic programs that generate unreadable code, web developers will use a free open source toolkit for WYSIWYG development that generates completely readable XHTML and CSS code (including ECMAScript glue code that automagically works around known browser bugs/deficiencies).

These tools will use advanced NLP algorithms to automatically add semantic attribute data to pages. Users can manually tweak this attribute data (which will be represented as RDF embedded in XHTML), but for the most part the automatic processing will greatly increase the searchability and indexability of all web documents by providing standard semantic attributes which can be used by search engines, mashup engines and query services.
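
As a hedged sketch of what such embedded attributes might look like (using RDFa-style attributes and the Dublin Core vocabulary purely as an illustration):

    <div xmlns:dc="http://purl.org/dc/elements/1.1/" about="/posts/the-web-in-2013">
      <!-- semantic attributes ride along on the ordinary markup -->
      <h2 property="dc:title">The Web in 2013</h2>
      <p>Posted by <span property="dc:creator">A. Student</span>
         on <span property="dc:date">2008-05-05</span>.</p>
    </div>

A search engine, mashup engine or query service can then read the title, author and date without guessing at the page structure.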

As for the question of whether or not the web will be "better" than it is now... Things will be much more standards-compliant- not necessarily compliant with the standards of 2013, but an incredible improvement by today's standards. In fact, browsers of 2013 will block and refuse to render any pages which do not validate, due to unspecified behavior. Because advertisements will be more graphical, they will tend to be more distracting than what we deal with today, but those same technologies will allow for advanced interfaces with fluid 3-D effects and transitions, making everything feel much more interactive.

But when it comes to actual content, people will be using this new enhanced web in much the same way as we use ours now; it will simply be flashier, interfaces will be more animated and fluid, and it will be much less bandwidth-efficient.

References:

http://en.wikipedia.org/wiki/XHTML

http://www.theregister.co.uk/2008/01/24/h20_sewer_rollout/

http://www.devarticles.com/c/a/Web-Style-Sheets/DIV-Based-Layout-with-CSS/

http://en.wikipedia.org/wiki/ECMAScript

Slow Growth in the Right Direction

Growth, Slow and Painful

HTML 5 is not set to be finished for 10 to 15 more years. It will provide many needed upgrades. This includes a much-desired concept: HTML5 doesn’t just define how valid documents are to be parsed, it also defines how parsing should work if documents are invalid, ill-formed, and broken, so that browser vendors can make their products fully interoperable with each other.

But that's 10 years away. In "web time," that is an eternity. So what do we do in the meantime? HTML 5 (or some other well-thought-out solution) will be a new standard for developers and designers to use. Proper use of this new standard will allow for more semantic, consistent web pages.

The reason this sounds like a dream is that most of the content on the web adheres to nothing. Perhaps we should fill the next 5 to 10 years with this: teaching standards and CSS. Before the semantic web, there has got to be a better markup language. Before the better markup language, people have to even get the point of standards. Part of those standards is separating content from presentation, thus the need for CSS.

None of the change we want is going to happen overnight. As quickly as trends come and go on the web, the languages it is written in change at a much slower rate. There is a hodgepodge of HTML versions in use. Even though CSS 2 was released in 1998, many web developers and designers do not know or do not use CSS despite its obvious benefits. CSS adoption has been hindered by the very same thing as HTML: cross-browser inconsistencies.

So when we push to teach standards to ourselves and to others, perhaps we should also include the browser programmers in this. Perhaps a half step in the progression is to put enough pressure on browser developers to provide similar and consistent output.

Where does that leave our grand plan?

  1. Consistent browser output
  2. Acceptance and practice of standards
  3. Thought-out, more semantic markup
  4. The web as it should be
    • semantic data and content
    • separate content and presentation mark-up
    • robust page renderings and visualizations

To move forward, we need to hit each of these points square on the head. We need to walk before we run. If you want to use the popular Wild West example, we need to bring law and order before we can advance. So what would this all look like?

I would love to see something like the div-based layouts described on A List Apart. With this, there should be a separate file for styling the output. Inside the layout blocks should be semantically defined content and data (I leave the how to the semantic researchers). A more semantic DOM would probably help.
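
Something in this spirit - not the actual A List Apart example, just a rough sketch - keeps the markup down to meaningfully named blocks and pushes every visual decision into an external stylesheet:

    <link rel="stylesheet" type="text/css" href="site.css" />   <!-- all presentation lives here -->
    <div id="header">Site name and navigation</div>
    <div id="mainContent">
      <div class="article">
        <h1>Story title</h1>
        <p>Story text, marked up by meaning rather than appearance.</p>
      </div>
    </div>
    <div id="footer">Contact and copyright</div>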

With these features fully enabled and in practice, I can see search, APIs, and data mining becoming far more practical and powerful. Advertising will continue to blossom; if it is easier for a machine to understand a page, then the software can present better-targeted ads.

Speaking of ads and money, the last thing that would help all this is money. Search, advertising, and business need to be able to see the clear benefits of these efforts. Companies used to ponder why they would need a website; now most launch one at their conception and wouldn't do business without it. Perhaps, one day, we could convince companies that they shouldn't launch a new venture without first creating a standards-based, accessible, semantic web site.

I look forward to the web of the future.

(Assignment #5) The shift from entity to tool

How will the web look in 5 years? There are two ways to approach this question: one is to look at what the newest technologies are, and then pick out which will live and which will die. The other approach is to identify a major trend. I will take the latter approach, partially because I just don't use the internet often enough to have a good idea of what the newest technologies are, and partially because I like abstract theory much more than practice.

First, let us recall how not the web, but computers, looked not five, but fifteen years ago. That was the old, old time when people were using DOS and GUIs were a new thing. Think of the way people interacted with a computer back then -- through a command line interface -- the user issues a command and the computer executes it. Interaction with a computer was like a conversation. But then came the fancy Windows GUI and everything was done by clicking on buttons. Now the computer isn't the other side of a conversation, but a place to keep the tools you use- those tools being applications used to manipulate documents, files, and their subparts. So the CLI-to-GUI shift is a shift from interaction with "the computer" to interaction with what it manipulates: data. I am not saying that one kind of interaction is better than another, but merely that what the user interacts with is different. But since the average user would rather directly manipulate objects than have a civilized conversation with an automaton, the GUI interface won in popularity.

Now look back at the shift from Web 1.0 to Web 2.0: in Web 1.0 we had static web pages which were reached by typing a URL in the browser. So we have the user request a page, the browser retrieves it, and that's it (similar to the CLI interaction, isn't it?). In today's Web 2.0 we have shifted our focus, again, to the content on the web, as opposed to the medium through which it is delivered. What I mean by this is that YouTube is used to retrieve a video, Facebook is used to interact with a person, Wikipedia is used to get information, etc. This shift to viewing websites as tools becomes more apparent when APIs rather than the websites themselves are used, or when mashups are created.

But this shift is not yet complete, which becomes apparent during the interaction with these websites. When someone goes to Google or Ask or YouTube or Wikipedia the first thing they do is enter search terms, and then they get their data. Even when we talk about searching the internet we sometimes say "Let me ask Wikipedia." "Wikipedia," clearly, is an entity when we look at it like that, and the next step would be to make the search engine's presence less apparent. Today's browsers are already on their way to making searches more transparent: if you enter search terms into Firefox's URL bar, for example, it will sometimes guess and immediately call Google's "I'm Feeling Lucky", or sometimes it will send you to the search results. Plugins and programs like Mash Maker go a step further and truly act as tools that manipulate the data on the web.

Another way in which the web is becoming a tool rather than an entity is through desktop applications that use the web without explicitly invoking the browser. Yes, these have been present throughout the history of the web, but with higher bandwidths and wireless internet in more and more places they are becoming much more usable and much more popular.

So in 2013 our interaction with the web will be a lot more transparent, and more intimately integrated with the desktop experience.

The Future of The Web

Five years ago, the web was by and large a static environment. There wasn't the data sharing, nor the data-focused web applications, of today. We were starting to see the emergence of online retail, which pushed a lot of the technologies that we see today. Now in truth, the web in 5 years will likely have spawned hundreds, if not thousands, of new technologies and focuses. But here, I'd like to focus on one area which I believe will drive the tech focus of the next 5 years - GeoLocation.

Even today, we are starting to see GPS positioning playing a larger and larger role in tech hardware, and it is just starting to bump into the web world. Even on devices with no GPS positioning equipment, several services, like Navizon, can triangulate a position using nearby wifi spots and cell towers. We are quickly arriving at a crossroads where every device we own can position itself on a global display. This puts us in an arena where we can be positioned at any given time throughout the day. This is a goldmine in so many ways, but where do I see this breaking in the next five years?

One obvious, high-monetary-yield extension is in advertising. One of the basic ways that local advertisement still beats out internet advertisement is in the precision targeting of a user. But if we know where a user is, we can push out interest- and location-relevant advertisements to that user. This would bump up ad revenues by huge amounts, and I'm sure Google would be greatly interested in pumping out this technology. Some possible tech extensions also include the ability to simply broadcast where you are. GeoTagging on Twitter is actually starting to take off, and provides you the opportunity to see where Twitter users are in a given area, or simply to update your family on where you are. It may sound ridiculous, but there are truly useful extensions of pushing out that information - imagine a teenager doesn't come home one night, and a mother is terrified. By logging in to Facebook, she accesses a secure page on her child's profile and can see - thank god - the Google Map shows her cell phone is at her friend's house. Sure enough, a very preliminary implementation of this does exist - on Facebook, in fact.

We are currently at a state where the hardware seems to be solidifying, and there is a demand on the software side. The web world is really where this is going to have to converge. And whatever other hot technologies are springing up in the 2013 web, I think a central focus of all these applications - from pure application functionality down to advertisements - is going to have to be GeoLocation.

Future Web: 2013

In 2013 the web will be different from, but still very similar to, the web of 2008. Since the web is built around the people who use it, and people tend not to change all that fast, the web will change the same way: not very rapidly. We will still likely have almost all of the technologies we use today, with only a few additions, if any. Technologies always take time to catch on, even major breakthroughs. It took time for things like MySpace and Facebook to catch on, and it will take time for other technologies as well.

One technology that will exist in a more advanced form is the semantic web. I believe that Semantic Web technologies will play a role in the future web but not in the web that will exist 5 years from now. The semantic web is a complicated idea that will be revolutionary when it catches on, but cannot possibly catch on until it has been perfected.

AJAX will be a driving factor in the web of 2013. It already plays an important part in many web applications and is progressing very rapidly. As more developers learn to use it and understand how easy it is to implement, and how easy it makes their applications to use, AJAX and similar technologies will become far more prevalent. I do not believe more advanced technologies like semantic web technologies will be common in 2013; they will exist in a more advanced state than they currently do but will still not be widely used. Current technologies like JSON, AJAX, and many of the web frameworks currently employed in Web 2.0, however, will have made significant advances.

Another key enhancement that will come about within 5 years is an increase in bandwidth. Video and flash applications are already becoming quite common on the web, and their content and quality are limited only by the speed of the average user’s internet connection. Internet2 helped to create the high-speed Abilene Network and the National LambdaRail project, which are a start toward much higher speeds than were previously possible. It is only a matter of time before technologies like those make it to the internet at large.

It is my opinion that every time an advancement is made in web technology and usage, a bad aspect of the new tech is introduced but is immediately countered by a good aspect that the same technology introduces. Facebook and other social networking applications allow unscrupulous users to ‘spy’ on people, but they also allow people with similar ideas to congregate in groups and share ideas and experiences that they would not otherwise have been able to share. The world will be no better or worse, but things will be easier for everyone: people with good and people with bad intentions alike.

Internet2 - http://en.wikipedia.org/wiki/Internet2
AJAX - http://en.wikipedia.org/wiki/AJAX
Slides - http://www.slideshare.net/hchen1/semantic-web-20-381520

The Future of the Web

I remember discovering the internet in 1996. Back then I accessed the world wide web via AOL 3.0, connecting at a blazing speed of 14.4 kbps, which was enough for me to read my online messages, join chat rooms filled with young kids from around the country, and surf ‘web pages’ that contained useful information about a given subject. Back then, web pages were static, images were low resolution and contained a handful of colors, and there were few or no standard methods driving web development.

Fast forward twelve years. The web today is cooked up in a variety of languages and frameworks and delivered to us using complex platforms that are built on a foundation of accessibility and scalability. Pages are beautifully styled, content is polished, audiences are targeted, and information is abundant. The evolution from the web of 1996 to the web of today was unlike anything we’ve seen before.

That brings us to the future. At this pace, what will the internet be like in 5 years – 2013? Some things will evolve faster than others; old ways will die out or become popular again; some new, exciting technology may be introduced that changes all dynamics of the internet, and if I had an idea of what it was, I would be rich. =)

So what will change?

Everything will be HD by default

I am a self-admitted news junkie, so I spend a lot of time going through the hordes of websites providing news from various sources and reading the user comments that help shape a story's impact. The single most frustrating thing about most of the news I read, though, is that the photos accompanying the story are low resolution – often 400x200 or some other 1996-era size. I predict news sites, especially mainstream sites like Reuters or CNN, will have galleries of HD pictures with each story. Each of these sites currently has ‘pictures of the week’ or ‘pictures of the month’ photo galleries, but the images are low-res. These sites feature stunning pictures that really have an impact on the story they are telling, but without access to the full quality, the whole story can never be told. So by 2013, most of the pictures we see on the net will be in high definition. The following prediction will make this possible.


Bandwidth – Not a problem

According to a report by the Wall Street Journal, the US is currently ranked 15th in the global broadband market, and that rank is falling. Bad business practices and lack of competition have stifled the growth of US broadband capabilities, and we are not seeing the kinds of speeds available in countries like South Korea or most of Europe. Hopefully the next president will be more open to the idea of net neutrality and take seriously our need for better access to the internet. Better broadband and more bandwidth will allow the publishing of high-definition photos and video because more people will have access to them. Right now, bandwidth is the only roadblock standing in the way of the high-definition web. (Companies save money by not publishing HD content; I personally find it disgusting that telecoms seem to be turning bandwidth into a commodity.)


Web 3.0


The ‘semantic web’ is still a relatively new subject, with researchers scrambling to find a way to implement the technology that will once again change the way the web works. While I believe we will see successes in our progression to the semantic web, I think there will be intermediate steps along the way that promote ideas of the semantic web but lack the fully autonomous ‘agents’ that are at its core. We have to see a migration that is on the scale of Web 2.0 – that is, we have to see incentives for businesses to invest in change. The incentive that is attracting companies now is advertising revenue. Mainstream companies do well to develop rich content that invites user contribution, attracting an audience that can be advertised to. These mainstream companies play host to ‘average Joe’ users- those who are not web enthusiasts tracking the changes of the web as part of their thinking and understanding.

That’s why I think we will see a ‘Web 3.0’ before we see the full-blown semantic web as envisioned by the Tim Berners-Lees of the world. Web 3.0 will first attract users via quality, usable information found by intelligent searches and delivered automatically with the help of RDF-like languages. After successful broad-scale, beta-like trials that see users utilizing Web 3.0, the big push will happen and mainstream companies will once again latch on and attract what I call the average Joes. The transition will be measurable, natural, and one that will serve as the next platform for the evolving web.

Wrap up

After seeing how fast Web 2.0 and current standards came about, 5 years in the internet world will seem like a lifetime for developing technology. Hybrid applications will become the norm, IPTV will overtake regular TV, and personal web spaces will bind to users in a similar way as cell phones do. Content will be high-def by default, available for sharing and circulation, and will come mostly from users like you and me. It is my hope that the internet stays free of political manhandling and corporate strongholds. As net neutrality gains traction, it will be the responsibility of the ‘you and me’s to ensure the internet’s freedom. The internet is the most democratic medium ever to exist, and the same power that is driving the web today (us) will keep it moving tomorrow.


References

“XHTML™ 1.0 The Extensible HyperText Markup Language (Second Edition)”, W3C,
26 January 2000. 1 May 2008. http://www.w3.org/TR/xhtml1/

Schatz, Amy. “U.S. Broadband Rank: 15th and Dropping”. 2007. Wall Street Journal, 1 May 2008.

Yihong-Ding. A simple picture of Web evolution. ZDNet, 5 November 2007. 1 May 2008. < http://blogs.zdnet.com/web2explorer/?p=408>

The state of the web in 2013

I am going to take the slightly pessimistic view and say that I don't believe there will be any revolution of the web in the next five years. Evolution along existing lines will take place, but I don't believe anything major will come along and displace what we currently know as the web. I also expect the "Semantic Web" to still be gaining traction, but at its current slow pace.

Perhaps the most significant difference is that there will be more "small screen" browsers accessing the web than "full screen" browsers. Mobile phones with real web browsers will dominate the User-Agent HTTP request headers in server logs. The iPhone is just the beginning of enabling full use of the web on small mobile phone screens. In five years, many more people will be surfing the web from their mobile phones than from their laptops or desktops.

HTML5 will be a standards document, but browser support for it will not be complete, and even fewer sites will make use of it [1]. The good news is that HTML5 adds some minor semantic markup to the HTML specification, but it will not bring about the semantic web revolution [2]. I believe blogging software and other content management systems will have built-in support for semantically tagging content, but the majority of websites will either not use it or still develop sites with tools that do not handle semantic markup.

"Web 2.0" will have gone mainstream. Users will have their choice among a large selection of mostly interoperable online office suites including photo and video manipulation applications. And just about everything else will be accessed through a web browser, native platform applications will be thought of as "old school".

All in all, nothing revolutionary will happen in the next five years, only evolutionary extension of the current technologies.

[1] [http://blog.whatwg.org/html5-geekmeet]
[2] [http://www.alistapart.com/articles/previewofhtml5/]

Sunday, May 04, 2008

The Internet, 2013

The internet in the year 2013 will be vastly different from the internet we know and love today. For one thing, it will probably be much more regulated. Long gone will be the days of tax-free shopping and that “Wild West” feeling. Instead we will find the internet replaced with a heavily secured and monitored version that, thanks to the failure of net neutrality, has been molded into a cable-TV-like model where users pay depending on the content they want.

Semantic web technologies will play a larger role in the web of 2013. Users will have tired of entering the same profile information on every new website they decide to join, which will allow FOAF and other similar formats to amass great popularity throughout the internet. Once a user has created a profile that contains semantic data, it’s simply a matter of accessing that data from other websites.

In order to usher in this new internet age, users could be made more knowledgeable so that they could format their own semantic data. Web sites could also provide scripts that convert existing profiles into formats that work with the semantic web of the future. A few modern-day blogs already produce FOAF output that users can utilize.
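
A minimal FOAF profile, for instance, looks something like this (the names, hash, and URL are made up):

    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:foaf="http://xmlns.com/foaf/0.1/">
      <foaf:Person>
        <foaf:name>Jane Example</foaf:name>
        <!-- a hashed mailbox instead of the raw address, to limit spam harvesting -->
        <foaf:mbox_sha1sum>0a1b2c3d4e5f67890a1b2c3d4e5f67890a1b2c3d</foaf:mbox_sha1sum>
        <foaf:homepage rdf:resource="http://example.org/~jane/"/>
        <foaf:knows>
          <foaf:Person><foaf:name>John Example</foaf:name></foaf:Person>
        </foaf:knows>
      </foaf:Person>
    </rdf:RDF>

A site the user joins could read a profile like this once instead of asking for the same fields all over again.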

The internet will be a better place for the typical user. For instance, people will no longer have to worry about the numerous scams they can be subjected to via the web. Spammers will be held more accountable because they can easily be tracked down and dealt with accordingly. Parents won't have to worry about what kind of content their children could be getting into because they can control what kinds of websites they subscribe to via their monthly bill. However, the users that knew and loved the old internet will have to be coaxed into embracing these new restrictions.

2013 - Return of the Wild West

I foresee the greatest changes in our society during the next five years occurring not as the result of any great new technological breakthrough, but rather as the result of the logical progression of current technology.

In the next five years the speed and ease of web publishing will economically shift information providers to primarily online distribution, rendering newspapers obsolete. This shift will threaten traditional notions of copyright and intellectual property. To cope, the television, radio, and music industries will undergo radical changes.

Virtually all information of any kind will be online, and people will simply never look for information if it is not online. Libraries will move their collections into full-text online documents and then shut their doors. Handheld, book-like electronic PDF readers will become the norm. For people who still prefer paper there will be a cheap printing and binding service (ordered online, of course) that prints PDFs as hardcover or paperback books and then mails them to customers.

Employers will all “google”, “facebook”, etc. their prospective employees first, and these results will highly influence their choices. Your online reputation will become more important than your credit score, and may cause you to be turned down for a job or refused for apartment rentals.

This will cause search engine optimization to intensify. Organizations will spring up to both protect your reputation, and damage the reputations of others. General lawlessness will prevail upon the web. Search engine optimization packages and intelligent personal data crawling agents will become like guns in the Wild West. There will be shootouts through the streets of the net, with nations, companies, and individuals vying for control of valuable online real estate in the forms of domain names, search results for query words, and “true” semantic information.

The victor will be determined by who possesses the most technical skills, or who hires the most talented coders. These shootouts will pollute the web with false information that will leave the technical have-nots and the poor swimming in a sea of falsehood. Trust propagation algorithms will only be as effective (and trustworthy) as the coders who wrote them. The gap between the rich and the poor will increase. Everyone will have a voice, but some will shout louder than others and no one will know who or what to trust.

The problem might even get bad enough that someone will solve it.

Web 2013

Web so far- Web 1.0, Web 2.0/3.0...and observing the trend

The web started as a means to facilitate communication and information exchange. In 1994, when it launched for most people, it was just HTML markup used to display text. The web was considered an 'information highway', and we saw the start of the dotcom boom with the rise of e-commerce websites like Amazon and eBay and email providers like Hotmail. After the dotcom bust, only a few websites like Google, eBay and Amazon survived, and with new technologies like AJAX, XML, RSS and folksonomies we now see a different genre of rich websites: Gmail, wikis, Flickr, YouTube, social networking sites like MySpace and Facebook, blogs, Google Maps and mash-ups. AJAX and similar technologies let applications provide a faster, more seamless web user experience. These applications tend to be characterized by users generating content on the web. In November 2006, there were over 8 billion web pages online, based on a variety of interesting concepts targeting users of all ages.

[Reference: 10 Years That Changed the World]

Web in 2013:

5 more years and you'll see -

Even though it's just a period of 5 years to 2013, I expect the emergence of new web technologies that will take the web user experience to the next stage, and these frameworks will be easy for web developers to incorporate. We'll see more full-featured browsers implementing the HTML5 standard. Mash-up technologies will continue to facilitate combining data from different web sources and will provide many new, interesting and useful features. We'll see many applications being provided by relatively novice developers. There'll be continued convergence of telecoms, social networking sites, the semantic web and composite applications. We have already seen recent developments in mobile application frameworks, with the iPhone SDK and Google Android SDK becoming available to mobile application developers. There'll be more web applications in which the channel and source of information will be cellphones.

With the WiFi web and improved Internet connectivity, we'll see 'always on' social communities, and there will be more applications similar to Twitter. There will be a proliferation of different virtual social groups, and we'll see a highly virtually socialized generation. Social networking sites will be more expressive. Current issues with social networking sites, like privacy and security, will remain, but I don't expect the rise of new adverse social and psychological problems like personality disorders or digital xenophobia in just 5 years. Life online will have changed the way we think.

Search engines will be more efficient and personalized, and users will be able to get exactly the information they want. The web will continue to be an entertainment platform. Technologies that govern 3D online digital content will continue to push the limits of multimedia for web applications. Online game applications like World of Warcraft and Second Life will continue to be popular, and they will redefine entertainment web applications. E-commerce websites will have more sophisticated recommendation systems making use of semantics and online social networks. Web applications will continue to satisfy human needs, and will enhance them.

We have already seen social web and semantic web analysis tools playing an important role in politics and political campaigns. Online voting will have been incorporated. By 2013, we'll see new cyberlaws that try to resolve issues related to privacy and intellectual property/copyright. Creative Commons will play a vital role in the new copyright laws. Internet penetration around the world will increase, creating opportunities to consider potential web users from a variety of socio-economic statuses, cultures, ethnicities and geographical locations.

In summary, the web will continue to facilitate our need to connect, exchange, share, compete and network in better ways.

FuTuRe WeB

When I started researching this topic, the distinction between web 1.0 and web 2.0 was still unclear to me until I found the following video on YouTube.



So clearly Web 2.0 is a marketing term. And that's what I had thought. So it's just the use of existing technologies in a different way that is convenient for users and attractive. Now I am in a better position to predict the FuTuRe WeB. (This is how logos are constructed in the Web 2.0 world, according to the following video.)



Jokes apart, every coin has two sides, and I believe that the future web will be a mix of good things and bad things.

Good things first.

The next-generation web will be a symbiosis of intelligent nodes. Yes, the conventional client-server model may not live. Rather, a P2P kind of model will be in place. Let me explain why. In my opinion, here's how we reached the web that we currently have. As computers became cheaper, more and more people started buying them. Soon, the static nature of the data on the computer made it too boring for use as an entertainment box. Also, the information that could be found was limited. So people started using the Web. They liked it due to its dynamic nature, and then with increasing usage there was a need for faster connections. As broadband was introduced, we started hosting videos and other rich content. Thus the way we use the web has changed as technology has changed, and changes in technology were driven by the demand for a better experience. Soon, we will reach a point where we won't have enough bandwidth to host rich, high-quality content. Then we will need to break the conventional client-server model. User communities will share content on a P2P-based web, decreasing the load on the servers. Maybe BitTorrent will run from within a browser. Browsers will act as clients as well as servers. This kind of model will be possible because every internet user will have his own domain. The names will be weird, though.

As Sarah has already mentioned, we will have devices that run only browsers. Browsers will be the new OSs. The internet OS will play an important role. If the connection is fast enough, it won't be hard to download 1 GB of web operating system code in a few seconds. With such high bandwidth, an internet OS will act as fast as a desktop OS does. The network will be used as a platform. P2P architecture will also make distributed computing over the web far more effective.

With a web OS, we will be dealing with an open platform. Apps will be portable, easily upgradable, and free(!), and we will have a file-format-independent web. OpenID variants will be the norm.

The semantic web will definitely find wide use. If we want companies to share their data, we may need to go with a licensing model or enable those companies to show ads along with the data. Semantic web concepts will be used between client and server as well as between two servers.

HTTP could get replaced by something else- something that supports a P2P structure well.

Just as google has now become a verb, other computer terms will start finding their place in the dictionary. E.g., if you don't like someone you will ctrl-c him/her. After a conference there will be F5-ments served. If you don't believe me, then you should watch this video.



Also, very thin computers will be available to drive the next web.



Now the bad part

Cybercrime will be a major concern. It will be very hard to track down criminals due to the complex nature of the new web. There will be lots of privacy and security issues. Unless we educate users, the web won't be a safe place.

Life will be too dependent on the web. Would you be able to live without touching your computer for 8 consecutive days? People will get bored with social networking sites. They will realize that interacting with real people is more important than chatting with a stranger online. Overuse of the web may have a negative impact on people's physical and mental health. It is better to play a real sport than a computer game. Believe me, it's good for your health.

Serious attempts will be required on organizations' part to stop employees from wasting their time on social networking sites. The web should be used to make things convenient and easy and to make information accessible. But the good thing is that we will have tools to measure users' productivity.

Web in 2013

The future of the web is certainly going to be mobile. Advances in mobile devices are going to shape future web applications. Google has already shown that information is the most important resource any company can have. Information combined with mobility will give rise to the paradigm of "information follows the user". By 2013, wearable computers will begin to make an impact on the computer industry. This will call for more creativity and innovation among application developers, rather than just logic or knowledge. Here's a video for example.



One class of application that is going to grow is social networking sites like MySpace, Facebook and Orkut. These sites have tremendous potential to grow in Asian countries like India and China. Every morning I check my email accounts - Google, Yahoo, UMBC [in order of priority :)] - and then my Orkut account. I won't be surprised if by 2013 I am checking my Orkut account first.

I don't think the semantic web in its current form, which is too demanding, is going to be accepted. Thinking from a user perspective, I don't see any strong use case for it. The wine-selection example we saw in class didn't impress me much; I would rather select wine from the restaurant's menu than query the web. Yes, we need means to share, integrate and reuse data across applications, but the semantic web or similar technologies can be helpful only if they blend with the existing web. The semantic web in an improved form will take some time to annotate the world's information.


Looking at the rate at which blogging is catching on with the younger generation, I think every person will have an identity/personality on the web. With more and more user content being generated, privacy and security are going to be great concerns.


By 2013, the web will be more personalized. Capturing more personal information about your users and knowing their preferences is a key area to concentrate on. Keeping users engaged in order to learn more about what each user wants, giving them everything at one website so that they do not have to leave the site, and customizing the web for users seems to be a good strategy.

For example, Project Joey allows you to customize your mobile web experience by bringing the Web content you need most to your mobile phone.




I also feel the way web pages are rendered will undergo a dramatic change. More 3D animation with smooth surfaces will appear. In 2013, browsing the web will be a completely different experience, with much less use of the keyboard, as we saw in the video. Voice recognition is certainly going to catch on; for example, you will be able to search Google by speaking rather than typing. Here's a similar example of voice web search:




With so much web around, there will be groups of people who get frustrated and form communities like "We hate computers" - but they will have to form those communities online! In short, you cannot escape the web.

The web in 2013

I think the web in 2013 won't be completely unrecognizable to the web users of today. To project where the web will be in 5 years, I think it's helpful to look at where the web was 5 years ago.

The web in 2003:
-craigslist
-MySpace*, Meetup.com, LinkedIn*
-del.icio.us*
-Wikipedia
-LiveJournal, Blogger
-Mapquest
-Amazon
-Ebay
-PayPal

* - launched in 2003 (source - Wikipedia.org)

A lot of the web sites that were known to only a few early adopters in 2003 are quite mainstream now. So the websites and ideas that will be wildly popular in 2013 probably exist now in some form.

Picking out which websites today will really take off in the next five years is difficult. I think that mashups will continue to grow in popularity. Commercial mashups will be created that pay licensing fees for their data.

I see semantic web technologies primarily being used behind the scenes to improve the web of today. The average web users of 2013 probably won't know what the semantic web is, but it will enable them to find the information they need every day. Sites like FreeBase will continue to grow and applications will be built to take advantage of their free structured data. Improvements in natural language processing will help the semantic web to use unstructured data.

Applications will continue to move off of the desktop and onto the web. Already, slimmed-down free versions of desktop applications are appearing online; Google Docs and Adobe Photoshop Express are good examples. Cheap online storage providers like JungleDisk or XDrive will start to replace large hard drives in home PCs. In the next few years I would not be surprised to see laptops being sold with only a web browser for an operating system.

Accessing the web through mobile devices will become more and more common, so more sites will appear that are optimized for viewing on those devices.

Watching TV shows online will become quite common. Because of this, advertisers will be forced to adapt and find new ways to reach consumers. Companies will try to create or buy internet memes. For example, a fast food restaurant will buy icanhascheezburger.com and turn it into a marketing slogan.

I think the web of 2013 will be an improvement on the web of today. I believe that it will become easier to find information and manage personal data.

Friday, May 02, 2008

Facebook applications = Spyware?

http://news.bbc.co.uk/2/hi/programmes/click_online/7375772.stm

Does your social networking site need antispyware software? The BBC recently ran a test by writing a Facebook application that, once installed and run by a user, harvested the user's Facebook data, and all of their friends' data too. So it didn't just steal the data of the person who installed the app, but that of their friends as well. Not really a virus, not technically spyware, and Facebook says that they will remove anything that behaves badly. Of course, they would have to realize it is behaving badly first.

Thursday, May 01, 2008

Adobe opens swf/flv files to developers

Adobe has decided to open up their proprietary file formats, .flv and .swf, to developers in order to secure their spot in the future of technology development. Imagine what this will do to the mobile market...
The Open Screen Project is working to enable a consistent runtime environment – taking advantage of Adobe® Flash® Player and, in the future, Adobe AIR™ -- that will remove barriers for developers and designers as they publish content and applications across desktops and consumer devices, including phones, mobile internet devices (MIDs), and set top boxes. The Open Screen Project will address potential technology fragmentation by allowing the runtime technology to be updated seamlessly over the air on mobile devices. The consistent runtime environment will provide optimal performance across a variety of operating systems and devices, and ultimately provide the best experience to consumers.


Check out their project site here: http://www.adobe.com/openscreenproject/

Wednesday, April 30, 2008

Dilbert 2.0


So I've been a big fan of the Dilbert comic strips for a long time now. I started to actually understand some of the subtleties after working in the corporate world, which made them even funnier. But now Dilbert.com is going interactive. In addition to a nice new blog, you can now add Dilbert widgets to your homepage/mashup site, including iGoogle or MySpace, exposing others to the wonderful world of Dilbert.

But I've saved the best for last. Dilbert.com has created a new mashup section, which allows you to complete the punchline for the daily comics. The top user-rated strips get displayed in the top 10. Some of the new endings are very funny and creative. So if you like to start your day with a good laugh, check out the Dilbert comic strips.


Sunday, April 27, 2008

reCAPTCHA

The other day a friend told me about reCAPTCHA, a CAPTCHA-based program (attempting to block spam bots) with an interesting twist. The developers (CMU CS students) are combining a web-scale CAPTCHA solution with an optical character recognition (OCR) system. Since OCR systems are not perfect, they have trouble recognizing some scanned words that humans can typically recognize. By combining known and unknown words in their reCAPTCHA tests, they can simultaneously sort humans from bots and enable an Internet-scale crowdsourced OCR solution. They're using the human-provided text answers to help the Internet Archive project with their digitization efforts.

reCAPTCHA is available for use on your own site, and they have plugins to make it even easier to add to WordPress- or MediaWiki-based sites.
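The pairing trick at the heart of the system is easy to sketch. Here is a toy illustration of the idea (my own sketch, with made-up words; it bears no resemblance to the real reCAPTCHA code or API):

```python
# Toy sketch of the reCAPTCHA idea: pair a word whose answer is already
# known with a word the OCR engine could not read. The human is verified
# against the known word; their answer for the unknown word is recorded
# as a "vote" toward digitizing it. Everything here is illustrative.
import random

known_words = ["crumpled", "morning"]             # answers already known
unknown_words = ["scan_word_17", "scan_word_42"]  # OCR failed on these
votes = {}                                        # crowd answers for unknowns

def make_challenge():
    return random.choice(known_words), random.choice(unknown_words)

def check_answer(known, unknown, typed_known, typed_unknown):
    if typed_known.strip().lower() != known:
        return False                              # likely a bot (or a typo)
    votes.setdefault(unknown, []).append(typed_unknown)  # record OCR vote
    return True                                   # human verified
```

Once enough people agree on an unknown word, it can be promoted to the "known" pool and used to test future visitors.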

Copyright Easter Eggs

I was reading up on the OpenStreetMap project and other GeoWikis, when I discovered this page describing something known as Copyright Easter Eggs.

In order to defend the copyright ownership of their map data, some map manufacturers have inserted small errors into the geodata, so that if anyone were to copy the maps, they could show that the copy was derived from their copyrighted source. Examples of this are streets that don't exist and street names that are slightly altered or misspelled. Churches that don't exist are also popular fake errors.

It's an interesting application of digital watermarking. The errors are not imperceptible (they are easily spotted if the person viewing the map knows what to look for), but because they are fairly random, interspersed with accurate data, and impossible to find exhaustively, this seems like a fairly effective way to demonstrate copyright.
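As a toy illustration of the idea (my own sketch, with made-up street names, not anything a real map vendor publishes), watermarking a street list could be as simple as slipping in a few fictitious entries and keeping a private record of them:

```python
# Toy sketch of a "trap street" watermark: slip fictitious streets into the
# published data and keep a private record of them, so a copied dataset can
# be identified later. All street names here are invented.
import random

real_streets = ["Main St", "Oak Ave", "Hillcrest Rd", "Charles St"]
trap_streets = ["Argleton Way", "Lye Close"]      # fictitious, kept secret

def watermark(streets):
    marked = streets + random.sample(trap_streets, 1)
    random.shuffle(marked)
    return marked

def looks_copied(suspect_streets):
    # If a competitor's map contains our fictitious streets, it was almost
    # certainly derived from our data rather than surveyed independently.
    return any(s in suspect_streets for s in trap_streets)

published = watermark(real_streets)
print(looks_copied(published))   # True: the published copy carries the marks
```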

On Google news Quotes

I've been pondering how Google News attributes quotes to people. Matt Hurst points out that only some people have quotes listed in Google News, and that some of the people who are not listed are important world leaders.

At first I thought they were scanning the text around quotes to match against named entities; I believed they used named entity tagging as part of their quotes feature. But then I noticed this well-known quote:
"I remember landing under sniper fire," she said. "There was supposed to be some kind of a greeting ceremony at the airport, but instead we just ran with our heads down to get into the vehicles to get to our base."
which was sourced to Some Article. But the quote itself contains no mention of who "she" refers to. Inside the article, the nearest named entity to the quote is Sarajevo, which is a place (but still a named entity); Hillary Clinton is mentioned by name further up and is the only woman mentioned in the article. This is evidence that Google News is doing some sophisticated processing to get these quotes. I wonder if Google disambiguates between multiple people of the same gender?
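For comparison, the naive heuristic I had first assumed would look something like this sketch (my guess at a baseline approach, certainly not Google's actual pipeline):

```python
# Naive baseline: attribute a quote to the nearest PERSON entity appearing
# before it in the article text. This is the simple approach the example
# above seems to rule out; it is not Google's method.
def attribute_quote(article_text, quote, person_entities):
    """person_entities: list of (name, char_offset) pairs from a NE tagger."""
    quote_pos = article_text.find(quote)
    if quote_pos == -1:
        return None
    preceding = [(pos, name) for name, pos in person_entities if pos < quote_pos]
    if not preceding:
        return None
    return max(preceding)[1]     # person mentioned closest before the quote

article = "Hillary Clinton spoke about Bosnia ... 'I remember landing under sniper fire,' she said."
people = [("Hillary Clinton", 0)]
print(attribute_quote(article, "I remember landing under sniper fire", people))
```

The Bosnia example suggests Google is doing coreference resolution on pronouns rather than just grabbing the nearest name.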

And then, just when I'm starting to get impressed with Google all over again, I check out this quote:

"Let me tell you something," she told labor leaders firmly in Philadelphia. "When it comes to finishing the fight, Rocky and I have a lot in common. I never quit. I never give up."

which they source to The New York Times, but when I follow the link the quote's not there! I'm getting flashbacks to past Google mess-ups. I know it was only an estimate... but these little things are starting to pile up. Anyone else noticing that Google is getting a little bit sloppy?



Blog Talk Radio


I enjoy listening to Farai Chideya's radio program News & Notes. While listening to her Blogger's Roundtable discussion last Wednesday evening I learned about Blog Talk Radio (BTR), a web-based service that provides a free solution for hosting and listening to live radio-style interactive talk shows. This appears to be a very effective union of old & new media formats that allows anyone with a phone and a computer to create an interactive call-in talk show. To quote the BTR site, Blog Talk Radio is:
"Internet Radio, Citizen Broadcasting, Social Media Podcasts" ... "Empowering citizen broadcasters to create and share their original content, we can now access a richly diverse, sometimes balanced, often peculiar, mosaic of the Global Human Voice."
The site includes a number of features to support live interactive broadcasts, including web controls that allow the host to activate or deactivate up to 4 call-in guests (via telephone or Skype), a chatroom feature, and music, sound effects, or commercials to be played during the broadcast. Once completed, the programs are made available as MP3 podcasts, RSS feeds, or via iTunes. Programs are broadcast as Windows Media streaming audio (so on a Mac you may need a Windows Media Player program or a browser plugin such as Flip4Mac to tune in live). In addition to the live broadcast features, the site also provides profiles and blog spaces for both hosts and listeners. They also offer a Flash player widget that hosts can embed within their own blog or web page so listeners don't have to navigate to www.blogtalkradio.com.

The site is free for listeners and hosts, because it is ad revenue driven. BTR offers a revenue sharing program to split the revenues between BTR and program hosts, with rates that depend on who brought in the advertiser. They also provide reporting tools to measure the listening audience and ad revenues.

In my own quick review there appeared to be many interesting programs offered with tremendous diversity, exhibiting a range of production quality. Blogs are reasonable for exchanging ideas casually (asynchronously), but when you really want to immerse yourself in a topic, multi-party interactive programming is even more fun. BTR seems to me to be one of those natural solutions that appears obvious in retrospect, because it makes a lot of sense. For modern, interactive and engaging information sharing, what could possibly beat BTR? (Hmmm... Now I will have to dig deeper into Interactive TV which I learned about at BTR!).

Mashup Camp

I was just reading about a yearly conference called Mashup Camp, or "The Unconference for the Uncomputer". The idea is to bring together a bunch of mashup developers and API/technology providers to come up with innovative new creations that really push the envelope in the world of mashups. There is no prescribed agenda in the "Unconference" (following the "Open Space" methodology); instead there are a number of leader-moderated sessions in which attendees drive the discussion.

Mashup Camp seems like an interesting experience, and if I ever end up getting into a Web 2.0 business or project, I would definitely be interested in attending.

Disqus: social commenting system

Disqus is a very good social commenting system that you can integrate with your current blogging platform. It lets you better track comments on your blog posts. Generally, you are required to subscribe to an entire comment thread, but most of the time you only want to see if there are any replies to your own comment. So you usually don't subscribe to the thread and figure you'll just come back to the article later to see if anyone has replied to you. With Disqus, only the replies to you get emailed.
Disqus also provides other useful features, like RSS feeds for comments, building clout based on votes on your comments, and widgets that display top commenters, recent comments, articles with the most comments, etc.

Aggregating social network data into single feed

There are various services available to aggregate data from all the social networks you belong to and provide you and your friends with a personalized feed. Examples are readr.com, FriendFeed, and Plaxo Pulse. Plaxo Pulse lets you sign in with OpenID. There are also Adobe AIR applications, like Alert Thingy and Feedalizr, for receiving updates from FriendFeed and Twitter. Moreover, there is 8hands.com, which connects all your social networks to your mobile device.
It's interesting to see so many supporting auxiliary applications being developed around social networks.
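Under the hood, most of these aggregators boil down to merging several feeds into one stream sorted by time. Here is a rough sketch, assuming the feedparser Python library; the feed URLs are placeholders:

```python
# Rough sketch of a social-feed aggregator: pull several RSS/Atom feeds and
# merge their entries into one reverse-chronological stream.
import feedparser

feeds = [
    "http://example.com/friendfeed.rss",
    "http://example.com/twitter.rss",
    "http://example.com/flickr.rss",
]

entries = []
for url in feeds:
    parsed = feedparser.parse(url)
    for e in parsed.entries:
        when = e.get("updated_parsed") or e.get("published_parsed")
        if when:                              # skip items without a timestamp
            entries.append((when, e.get("title", ""), e.get("link", "")))

# Newest items first, across all networks.
for when, title, link in sorted(entries, reverse=True):
    print(title, link)
```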

Grou.ps

Grou.ps is a website that combines a set of tools like blogs, photos, a wiki, a forum, chat, etc., to overcome the hassle of linking these things together yourself and the need for multiple sign-ons. You can synchronize your photos from Flickr and add or remove the modules you want. It does not show ads on your pages, nor does it show its own logo. Moreover, you can add your own ads on your group pages and make money.

I don't know how they make money from this service.

Timeline Mashup

Lifehacker recently posted about Dipity. Dipity lets you pull data from other online applications and RSS feeds and presents it on a timeline.

Skoogo - Information networking

The web is changing. User contribution is now what makes or breaks a site. But letting users contribute content can sometimes produce results you did not expect. SkoogO is targeted at students who want to share information. It has a question-and-answer format, just like Yahoo Answers: students ask questions which are answered by fellow students. But when I visited the site, most of the questions were non-academic, like "I know how to make money online: go to ..." and "Why are gas prices so high? Is it still the war with Iraq?"

Saturday, April 26, 2008

Groups - An online social group platform

Groups is a social groupware service for online communities. It is a platform for social groups to get together. It is very easy to set up and customize your online group by integrating pre-existing modules such as blogs, talks, people, maps, photos, and live chats. It has all the features you would expect, like wikis, photos, links, and blogs. It is completely free and does not have any kind of ads.

wickd: "tech fashion"

Found via trendwatching.com's April Trend Briefing, a clothing company in the Netherlands will put a ShotCode on a shirt. A ShotCode is a "barcode"-like image that encodes a URL into a small graphic. The website provides a mobile phone application that uses the camera on the phone to read the ShotCode and then open the phone's browser to the encoded URL. What is interesting is that wickd will create a ShotCode for your personal URL and then put it on a shirt of your choice. So you can now take your online social presence offline.

Mash Maker is in public beta

For those that haven't seen it yet, the Intel Mash Maker program is in public beta. You can now download the plugin and try it out. The gallery, unfortunately, has a lot of duplication, no doubt from initial users following along with the posted videos, creating the same mashups, and publishing them. There are a couple of interesting mashups showing charts and calendar/timeline views of Yahoo Finance data. Another cool one shows Pollstar and StubHub data for the current artist playing on last.fm.

Friday, April 25, 2008

Social Networks Mirroring Reality TV

According to a study conducted by the University of Buffalo and the University of Hawaii, young people who like to watch reality shows like American Idol are more likely to accept unknown friend requests and to be interested in social networking. The researchers claim that "online social networking may not be a fad. Beyond its usefulness for communication, personal expression and directory look-ups, the sites are also working in sync with some of the biggest cultural trends at large".

More information and links can be found here.

Twitter

Earlier this month an American student was arrested in Egypt while trying to photograph a demonstration. The student used his mobile phone to send a message to Twitter simply stating "Arrested". His Twitter readers were then able to contact the US Embassy and the media to draw attention to his case.

The TechCrunch article about this has some interesting comments. Everyone seems to agree that this is fantastic PR for Twitter, especially with the outages and personnel turnover going on at the company right now. There also seem to be a lot of Twitter haters.

Thursday, April 24, 2008

Bluestring = mashup?

Bluestring is an online photosharing site owned by AOL that allows users to create collages and videos to share. What's really interesting is that you can use media stored on Picasa, Photobucket or Webshots without having to re-upload a copy to Bluestring. It accesses pictures stored on other sites through their APIs. Mashable has an article going into more detail.

Would this be considered a commercial mashup?

HousingMaps

HousingMaps is a website powered by Craigslist and Google Maps. It only works with select cities, but it allows you to view housing for sale or rent. You can query by price range and then sort the results by things like whether pictures are available and the number of bedrooms. The description field contains the link to the posting on Craigslist. It would be cool if someone made a mashup that could search and map out the various types of things available on Craigslist. Links, anyone?
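The basic recipe behind such a mashup is: fetch listings, geocode them, and drop them on a map. A very rough sketch follows, with a placeholder listings feed and a hypothetical geocode() helper standing in for a real geocoding service:

```python
# Very rough sketch of a craigslist-style map mashup: read a listings feed,
# geocode each listing, and emit points for a map layer. The feed URL and
# the geocode() helper are hypothetical placeholders, not real APIs.
import feedparser

def geocode(text):
    """Placeholder: a real mashup would call a geocoding service here."""
    return (39.29, -76.61)        # fake latitude/longitude

listings = feedparser.parse("http://example.org/housing.rss")

points = []
for item in listings.entries:
    lat, lon = geocode(item.get("title", ""))
    points.append({"lat": lat, "lon": lon, "link": item.get("link", "")})

# 'points' could then be serialized to JSON and plotted with a mapping API.
print(points[:3])
```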

Google & Orkut

An article posted on Yahoo News discusses Google's decision to hand over files on suspected pedophiles using Google's social networking site, Orkut. Although Google has fought for confidentiality in the past, this is reported to be the first time they have complied. Online discussions are fiercely debating the greater good vs. privacy. What do you guys think?

Wednesday, April 23, 2008

Semantic Data Storage

Since we had another lecture delving into the semantic web, I thought I would talk about some semantic web technologies, like Oracle 11g. The approach Oracle takes is to use their existing database technology and augment it to support OWL constructs and inferencing. Oracle 11g holds a table for each semantic model, and the structure of these tables is always the same. These data tables have three columns, one for each part of the triple; in addition, they have a column providing a unique identifier for each triple, a column linking back to the model the triple belongs to, and a bunch of columns that are reserved for future use. Oracle 11g automatically maintains views for which tables belong to which data models, a view of the triples associated with each model, and other views.

Oracle 11g supports inferencing using rules and rulebases. Rules are basically if-then statements, and rulebases are their containers (implemented with tables). Oracle 11g comes with a set of common rulebases for RDF (and variants) and OWL. Rules can be created and inserted into rulebases. Inferencing is further supported with rule indexes that hold precomputed values for rules. Oracle 11g also provides query functions like SEM_MATCH and SEM_DISTANCE.
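To show the flavor of rule-based inferencing over triples, here is a generic sketch in Python. It illustrates the idea of an if-then rule applied to a triple set; it is not Oracle's SQL interface or syntax, and the facts are made up.

```python
# Generic sketch of forward-chaining inference over triples, to show the
# kind of if-then rule a rulebase holds.
triples = {
    ("alice", "worksFor", "acme"),
    ("acme", "locatedIn", "baltimore"),
}

def apply_rule(facts):
    """Rule: if ?p worksFor ?c and ?c locatedIn ?city then ?p basedIn ?city."""
    inferred = set()
    for p, rel1, c in facts:
        for c2, rel2, city in facts:
            if rel1 == "worksFor" and rel2 == "locatedIn" and c == c2:
                inferred.add((p, "basedIn", city))
    return inferred

triples |= apply_rule(triples)
print(("alice", "basedIn", "baltimore") in triples)   # True
```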

If this brief overview has whetted your appetite, there is plenty more to check out.

Microsoft's Live Mesh

Microsoft will soon debut their new web-based software platform, known as Live Mesh, which is intended to compete with similar offerings from Google and Amazon. Their hope is to blur the line between desktop and web applications (much of the buzz today) and develop software to sew our digital lives together. Their software will "mesh" together your Zune, Xbox 360 and your Windows media into a synchronizing, accessible-anywhere digital repository. Read the full article over at NYTimes.

Monday, April 21, 2008

Missing Link

Okay, so class got me thinking.

Let's go with the idea that the semantic web is what we described in class: a method of structuring and conveying the meaning of data in a way that is machine readable and reusable.

However, that's just the data and information you wish to convey. That is not the display of the data. Part of the weight behind Web 2.0, the web as it is today, is the rich, visual experience. People love a rich interface.

The issue with HTML is that people are attempting to structure their data in it and to display it. CSS separates, mostly, design from content. However, the display structure of the document is still tied into the structure of the data.

So to fully reach that metaphorical island Dr. Chen spoke of, we need not only to publish and share our data in a semantic, machine readable and reusable manner, but we also need a new manner in which we create the rich, artistic, and accessible interfaces that is completely separate from how we mark up our information and data.

What the new markup or software or technology that enables this is, I'm not sure. However, I think such a thing, teamed with semantic data, would be powerful.

Sunday, April 20, 2008

Wikipedia Scanner

I learned tonight (reading on Wired) about Wikipedia Scanner, which lets you look for Wikipedia edits by organization name. I understand Virgil Griffith (a Caltech grad student) hosts a database of Wikipedia edits by IP, date, topic, comments, etc., including the reverse DNS lookup info for all the IPs from which the edits originated. The site is set up so you can enter an organization name (e.g. Microsoft or Department of Energy) and see what topics someone from those organizations contributed to or commented on, and it then provides specific edits in Wikipedia "diff" form. It can be politically touchy (I guess that's in part why it was set up) to see which organizations are putting shameless spin on which topics. Wired hosts a crowd-sourced "salacious edits" collection that is fun to peruse or contribute to.

This is a great example of combining two distinct sources of public information to make the information much more useful and interesting. Equally interesting would be the Wikipedia scanner's own collection of what IPs and organizations are reviewing what Wikipedia posts...
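The reverse DNS step in that pipeline is easy to sketch (the IP addresses below are documentation placeholders, not real edit data):

```python
# Sketch of the reverse DNS step: map the IP address behind an anonymous
# Wikipedia edit back to a network name that hints at the organization.
import socket

edit_ips = ["192.0.2.10", "198.51.100.7"]

for ip in edit_ips:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
        print(ip, "->", hostname)
    except socket.herror:
        print(ip, "-> no reverse DNS entry")
```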

Social Autistics

We spoke in class about how new Internet technologies can provide critical social connection tools for groups with very specific interests, such as support networks for families dealing with specific medical conditions. I read an inspirational article in Wired about how individuals with autism have been using blogs and even YouTube to network with other autistics, researchers, and the public, and to pursue information campaigns and political agendas. Recent scientific publications are challenging the conventional wisdom that the majority of autistics score in the mentally retarded range on standard intelligence tests. The article makes the point that autism is increasingly viewed as involving developmental differences and not necessarily a disease that results in a defective brain. The opinions on this matter vary greatly, as do the levels of disability that result from autism. Nonetheless, autistic individuals who are able to employ social networking technologies are now reaching out in novel ways and redefining the way we view and relate to them.

The secure social web

How do you know that the person you are talking to on Facebook is really your friend from high school? 50 lbs plus or minus and a person can look a mite different...

How would a secure social web change how we interact online? Assuming everyone authenticates to a central authority and it works reasonably well, would we change what we do?

You could sign everything you post with your cryptographic signature, so people can trust it came from you. No more hijacking with a simple password to send stupid messages when your friend steps away for a snack. You would be held more accountable, but you could hold everyone else to that same accountability. How would we treat people who were not properly authenticated? With a level of distrust, questioning what or who they really were?
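Signing a post could be just a few lines with a standard public-key library. This sketch uses Python's 'cryptography' package purely as an illustration of the idea; no social network actually works this way today.

```python
# Sketch of signing and verifying a post with a public-key signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()      # published via the central authority

post = b"Meet at the usual place at 7."
signature = private_key.sign(post)         # attached to the post when sent

try:
    public_key.verify(signature, post)     # anyone can check authorship
    print("post really came from this key")
except InvalidSignature:
    print("post was forged or altered")
```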

Granted, there are about 300 million implementation problems, but it will get here in the end, as it actually helps the **AA companies; not that that is really a plus...

Social Networking in the Classroom

This article talks about how use of the Internet and the Web has caused problems in the classroom, and how some schools, the University of Chicago Law School in particular, have taken extreme measures to counter the trend.

http://www.insidehighered.com/news/2008/04/18/laptops

What does someone do on their laptop in class? Are they talking with friends, checking email, playing games, doing work for another class or are they actually taking notes, following along in the online slides or some other productive use?

Social networking works well outside the classroom, but there does not seem to be much use of it inside the classroom. One could attribute much of this to the behemoth that is Blackboard. For whatever long list of problems people have with the system, it has remained powerful. Recently, Blackboard lost a patent case in which a great number of its patents were invalidated. What is interesting is where social networking technologies could be used in the academic environment. Blackboard has its discussion board, but it is not realistically real-time and is not meant to be used as such.

Merge a technology like Twitter or AIM with Blackboard and bring students back on topic by allowing them to have backchannel discussions during the lecture. This could lead to new insights, questions, and involvement that could not have come about before. The possibilities for both good and bad uses of these technologies are quite large.

You can try as hard as you like to stop the future of ubiquitous computing; why not embrace it and make it do something useful?

Sazze

Sazze is a Web 2.0-style product review site driven by an underlying social community of its users. "Sazze.com is based on a reputation system that takes the idea of giving and receiving positive feedback as an incentive for increased interaction between users." You get better reviews and can create polls. Here is the FAQ for more information.

WaveMaker

WaveMaker is a free Ajax development studio. It is a visual builder with drag-and-drop features, and it comes with a rapid deployment tool called PushToDeploy. Here is the list of features from the site:

* Drag & Drop Assembly
* LiveLayout™
* Push to Deploy™: One-touch application deployment
* Visual Data Binding
* SOAP, REST and RSS web services
* Leverage existing CSS, HTML and Java
* Deploys a standard Java .war file
* Free

Aviary : Rich Internet Applications for Artists

From image editing to typography to music to 3D to video, Aviary offers tools for artists.
There have been tools like Picnik and Adobe Photoshop Express for editing photos, but Aviary promises 18 tools (all with bird names) that offer additional features including pattern editing, 3D modeling, and vector image creation and editing. Aviary is a website for artists of all genres to create, edit, and share their works directly in their favorite browser.
The most interesting tool, found in their product blog, is called Dodo, a web-based time machine!

The service will allow you to upload images of people, places, and things; you provide the years, and it will age or de-age everything in the image using the Astley-Zonday time displacement theorem, with accurate results. Below is the demo video.