Monday, May 12, 2008

Spring/WebWork/Tomcat/Maven?

Personally, I found the Spring/WebWork/Maven combination to be a pretty sweet set of frameworks to develop with. There was definitely a huge learning curve (see the misc. project presentations on others' experiences), but once you get past that, they provide great flexibility.

Maven does a lot of the heavy lifting for you. You describe your module in a single file, pom.xml, and from that Maven automatically pulls every external dependency needed to build and deploy your small module from the Gnizr servers, compiles your code, overlays any files you generate on top of the delivered code, and packages the result into a war file (the standard format for deploying Java web applications). I know from personal experience that compiling Java and Tomcat projects by hand and assembling a war or jar archive yourself can be pretty daunting.
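To make that a bit more concrete, here is a rough sketch of what such a pom.xml can look like. The group, artifact and version values below are placeholders for illustration, not the actual gnizr coordinates:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- placeholder coordinates for our own module -->
  <groupId>edu.umbc.example</groupId>
  <artifactId>my-bookmark-module</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>war</packaging> <!-- tells Maven to assemble a deployable war file -->

  <dependencies>
    <!-- hypothetical coordinates: the delivered webapp we build on top of,
         fetched automatically from the remote repository and overlaid into our war -->
    <dependency>
      <groupId>com.gnizr</groupId>
      <artifactId>gnizr-webapp</artifactId>
      <version>2.3.0</version>
      <type>war</type>
    </dependency>
  </dependencies>
</project>
```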

WebWork gives us a nice Model-View-Controller implementation to work with. It supports several view technologies, including JSP and FreeMarker templates, and it ties into Spring for managing singleton and DAO objects and friends.
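As a rough illustration of that wiring (the action name, class and template paths below are invented for the example), the controller-to-view mapping typically lives in an xwork.xml file along these lines:

```xml
<xwork>
  <!-- pull in WebWork's default interceptors and result types -->
  <include file="webwork-default.xml"/>

  <package name="example" extends="webwork-default">
    <!-- controller: a Java Action class; views: a FreeMarker template or a JSP -->
    <action name="viewBookmarks" class="com.example.action.ViewBookmarksAction">
      <result name="success" type="freemarker">/pages/viewBookmarks.ftl</result>
      <result name="error" type="dispatcher">/pages/error.jsp</result>
    </action>
  </package>
</xwork>
```

WebWork invokes the action's execute() method and then uses the returned result name to pick which template or JSP renders the response.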

Spring makes it easy to pass common objects into other objects and to control initialization declaratively via XML, and with Maven it is easy to extend the existing Spring XML with new XML of your own.
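For example, a fragment of Spring XML that wires a shared DAO into the action above might look something like this; the bean ids and class names are made up for illustration:

```xml
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN"
  "http://www.springframework.org/dtd/spring-beans.dtd">
<beans>
  <!-- a single shared DAO instance, created and initialized by Spring -->
  <bean id="bookmarkDao" class="com.example.dao.JdbcBookmarkDao"/>

  <!-- the WebWork action receives the DAO through its setBookmarkDao() setter -->
  <bean id="viewBookmarksAction" class="com.example.action.ViewBookmarksAction">
    <property name="bookmarkDao" ref="bookmarkDao"/>
  </bean>
</beans>
```

With the WebWork-Spring integration in place, the action never constructs its own dependencies; Spring hands them in at initialization time.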

For a simple project, Spring + WebWork would probably be overkill, but for a system designed to evolve over time and to be used and edited by many people, I think it (or something similar) is essential for keeping things organized in a sane manner. Maven, on the other hand, would most likely be useful in projects of any size.

Subversion: Necessity or Annoyance?

Personally, I find a good version control system to be a core necessity for every major programming or scripting project (major being, let's say, more than 5 source files or at least 2 people working on the project). When used properly, it makes it trivial to track every change to the codebase and why that change was made, and it lets you revert specific changes if an earlier version of the code is needed.

Also, when more than one person is working on the project from separate accounts and/or computers, it lets each person make changes to the code, commit them, and have those changes show up in everyone else's copy of the code, without ever having to merge changes together by hand. And in complex projects, even if you are the only one working on the code, having the ability to revert changes is key (having your code automatically backed up on Google's servers is definitely an advantage too :-).

So... Agree? Disagree? Unsure? DISCUSS!!!

Sunday, May 11, 2008

Web 2.0 at the Department of Defense

http://www.pcworld.com/article/id,129328-c,internetnetworking/article.html

The Department of Defense's Defense Intelligence Agency has been attempting to use Web 2.0 technologies, starting with a wiki, since 2004. They are trying to leverage wikis, mashups, blogs and RSS feeds to assist their analysts.

This is a departure from the standard use of technology in government, where hardware and software upgrades can take years and the latest and greatest web technologies are shunned both for security reasons and out of misunderstanding of their uses.

While we won't see these technologies at all levels of government, and for valid reasons, in certain corners of government we could see them used more and more.

Add some Wesabe to your Mint

http://www.boingboing.net/2008/04/24/wesabes-new-reccomen.html

Wesabe uses anonymized purchase data to help you manage your finances better, and it has recently launched a system that recommends other places to purchase the same items. It seems like it would pair nicely with Mint.com in an overall financial strategy.

Of course, when the system says I should buy at Walmart instead of Target, I'll have to refuse that advice. The next step for these financial systems should be to start monitoring the social and environmental records of the places where you shop.

Semantic Hacker

http://www.semantichacker.com/

Semantic Hacker has an open API for semantic discovery and is running a million-dollar challenge to use their system in new and interesting ways. The system pulls semantic data out of any text source; there is an example on their site where you can paste in text and get the data back out. It seems mostly like a keyword-analysis system. It would be interesting to have someone comment on the system as it stands.

"Develop a software prototype, business plan or both with commercial viability that is focused on a vertical market. Solutions in finance, healthcare or pharmaceuticals might be good places to start."

SocialDevCampEast - Report from the field

Or the day after the field.

https://barcamp.pbwiki.com/SocialDevCampEast

If you have ever been to an IEEE or similar conference, you know that everything is planned ahead of time and there are committees for everything. That works well, but for a smaller event like the SocialDevCampEast that happened yesterday, the power of Web 2.0 technologies came together to keep things flowing smoothly throughout the day.

Take a look at the wiki link above. There is a proposed schedule. At a regular conference, that would BE the schedule. At the camp, there weren't even defined sessions until the campers voted on them in the morning. You can see that the proposed schedule got pushed back as people showed up a little later than expected. The actual schedule sits on the page alongside the proposed one, showing the differences. Updated in almost real time, this is a huge break from the monolithic structure of regular conferences. It was encouraging and impressive that such an open-source-style event flowed so smoothly.

The actual content of the day, and its overall purpose as I saw it, was interesting. At a regular conference the sessions might run two hours each, starting at 8 am and ending at 5 pm. This camp had one-hour sessions, each on a single topic, ending around 4 pm. Since the topics were decided on in the morning, the presenters varied in their preparation: some were ready to go with organized presentations, while others threw something together according to the requests of the campers.

Two sessions in the morning, with discussion throughout, led to a discussion-filled lunch, followed by another two sessions in the afternoon. Each session offered four different topics in individual rooms, for a total of sixteen topics throughout the day. Following the sessions there was a camp-sponsored open bar down the street at The Brewer's Art. With sixteen topics covered, and people from D.C., Baltimore, Philly, NYC and other places with all kinds of technical backgrounds, there was plenty of conversation for hours after the official sessions were over.

In CMSC 491S/691S we had interesting lectures that led to discussions that had to be cut short when class ended. Imagine sixteen lectures in one day and hours of discussion afterwards. Contacts were made, friendships formed, and business opportunities opened up, and the results of the camp will echo for quite some time.

Another camp will be held in Fall 2008. Whether it will be in Baltimore, New York, or someplace else hasn't been announced, but it will probably be bigger and better than this one.

Tuesday, May 06, 2008

BarCamp East Saturday May 10

I heard (via an old-school radio broadcast, Mario Armstrong's Digital Café short program) about an interesting SocialDevCampEast event this coming Saturday, May 10 (8:30 am to 10:00 pm) at the University of Baltimore. It is described as "an Unconference for Thought Leaders of the Future Social Web". It appears that over 160 participants have signed up in advance, including UMBC folks (UMBC is a sponsor). This is one in a series of BarCamp workshop conferences distinguished by their low-overhead, self-organized, user-generated, open-participation style. The event site lists a number of diverse proposed sessions, many of which should be of interest to our class members. Frankly, I'm shocked I've not heard about this event via class or our blog! (Wish I could attend, but it's not to be.)

Monday, May 05, 2008

I Need Information…

Whenever you need information to help you make a decision, wherever you are, that is an opportunity for computing technologies and the Internet to help provide the answers. This basic premise underlies much of the computing revolution over the past few decades, although in recent years much of the emphasis has shifted toward the "wherever you are" aspect, as students and business professionals embrace mobile Internet-enabled devices. What sort of information are we talking about? Of course there's plenty of well-structured, "data"-like information to be had- everything from stock quotes, product prices and information, sports scores, and weather to descriptions of people, places and things in the world, to name a few examples. As the relative cost of storage and computing has plummeted, rich multimedia content has become commonplace- images, audio, and now video are ubiquitous on the Internet. But our most recent Internet developments have encouraged broad-based publication of more information of the "knowledge" sort. As a society we increasingly value news, blogs, messages between friends about people and events, and social commentary that the masses themselves are generating. We refer to this as the long-tail effect in media production [1]. I was struck by the statistic Akshay Java cited earlier this week- that publicly generated text content dominates professionally edited text content by a factor of four or five to one [2].

Today's and tomorrow's Internet capabilities are truly dependent on wireless communication technologies and inexpensive electronics- those of us in the software business should not forget that. Consumer electronics products allow us to consume all kinds of information in a multitude of forms to suit our lifestyles- some of us sit at our computers reading e-mail, web pages and blogs and watching videos; others listen to or watch streaming multimedia on their home entertainment systems; still others listen to podcasts on portable audio players; and the mobile computing crowd does all of this and more on their computer-enabled phones (or are they cell phone-enabled computers?). In the next five years we will see all of these modalities grow in popularity, as home computers blend in with entertainment systems to support diverse sources of streaming audio and video (TiVo, Sling, Roku, Apple TV), and we take the most miniaturized forms of these electronic capabilities with us everywhere (even on highly active excursions like hiking, climbing, running, boating & swimming!) [3]. Mobile devices will be made tremendously more effective and convenient by Internet sites, services and applications geared toward mobility- going beyond local content (traffic, shopping, weather, news, and events) and multimedia playback to include diverse communications: phone, messaging and even social networking [4,5].

When looking five years ahead, however, the most notable theme I anticipate is a simple twist on the long-standing premise that I began with: whenever and wherever you would like to produce information, that is an opportunity for computing technologies, and particularly the Internet, to help provide the means. We'll be doing more than taking some geo-referenced pictures and uploading them- we're talking about providing the context and commentary; micro-blogging on a massive and distributed scale; crowd-sourcing coverage of live events; and even organizing flash mobs to create the events [6]. We will not just be consuming the information out there- we will be interacting with the world, and with each other, and literally making the news.

I need information… and I need to produce information!

References:

[1] “The Long Tail” described on Wikipedia http://en.wikipedia.org/wiki/The_Long_Tail

[2] Akshay Java, class lecture on 2008-04-30 http://socialmedia.typepad.com/blog/files/socialmedia.pdf

[3] Travis Hudson, “All Nike Shoes to Become Nike+ Compatible”, article in Gizmodo 2007-03-26 http://gizmodo.com/gadgets/portable-media/all-nike-shoes-to-become-nike%252B-compatible-247097.php

[4] Ellen Uzelac, “Mobile Travelers: Wireless devices, such as GPS units and cell phones, are transforming the way we vacation”, article in Baltimore Sun 2008-05-04
http://www.baltimoresun.com/travel/bal-tr.techtravel04may04,0,7453953.story

[5] "Mobile Social Network" described on Wikipedia http://en.wikipedia.org/wiki/Mobile_social_network

[6] Madison Park, “At harbor, 80s-tinged flash”, article in Baltimore Sun 2008-05-04 http://www.baltimoresun.com/news/local/bal-md.rickroll04may04,0,4649727.story

UMBC 2013

I'm envious of you kids starting out now. Back in 2009, when I started at UMBC, I got my new quad-core MacBook running Leopard. Of course Lynx and Cougar have come out since then, and they run fine on my machine. It's a little slow running iLife '12, but that is to be expected with the haptics they are running through the new iPods.

Of course, it doesn't really matter how fast iLife is; I still get along fine online. That is where everything is these days. I opted for a 500GB drive when I got my machine, but four years later it is still mostly empty. Everything is online. I use Google Documents for the few text docs I need to exchange with some backwater friends and keep everything else in my personal cloud out in Rwanda. Lack of natural resources, a growing population, and rising prices in India and China drove outsourcing to new locations. Technically, my data isn't only in Rwanda; it is all over the world. Natural disasters aren't a thing of the past, but the cloud is maintained by les Rwandais, and it works as well as any of the ones in Bangladesh or Peru that my friends use. Enough about my extracurricular activities and me.

After Blackboard lost their patent suit while I was in high school, a swarm of open source systems sprang up to devour the previously off-limits IP. UMBC switched over to one of them a year or so ago and it has been great. Built on the Mozilla Facebook platform, it enables the kind of collaboration that professors trying to prep us for the real world could only have dreamed of when I was in junior high. Hell, Facebook barely existed back then. They had millions of users, sure, but the interface was so simple. I looked you guys up before the tour, and good job: you've learned how to use the fine-grained access controls, and you'll appreciate that in four years. Social graphing and reputation markets were just research topics back then, but you use them every day.

You may have heard of professors turning off the Internet back in 2008. That doesn't happen at UMBC; it is used throughout the disciplines. Now that the semantic web is more reality than pipe dream, the web is more useful than ever. That required system they are quoting down at OIT and the bookstore is just a minimum; you might want something a little more powerful depending on your major. Right, you'll want something more powerful whatever you are doing. Granted, much of your system will exist online, but much of the rendering is done locally, so get an Intelvidia or AMD-ATI card in whatever you buy. School is what you learn, but it is also who you network with, now more than ever in this connected world.

The world may be a little hotter, and gas a lot more expensive, but it sure seems like web technologies have helped make it a better place. Whatever you do here, you will be connected to everyone else on campus and around the world. No longer hindered by travel requirements, you may get a guest lecture from a Swedish designer on an all-night design treatise or an explanation of biochemistry from Vietnam. They don't call it an Honors University for nothing.

Thank you for visiting UMBC today. I know being outside in the big blue box on an August day in 2013 isn't exactly your idea of fun, but it is good once in a while. For those using the iRobot telepresence units, please send them back to the Commons before you log out or you will be charged extra. School is a place you learn, not a place you are, but you might want to get on campus once in a while. You have such opportunity ahead of you, and I think you have made a good decision choosing UMBC. Class of 2018, my caps lock off for you! Sorry, that was a very, very lame joke that I know only two of you understood.

The Web in 2013

In 2013, the web will be fairly similar to the one we all know and love today. There will, however, be several key differences.

The first and most obvious will be the adoption of XHTML 1.0 Transitional as the de facto standard. Some web sites will be using pure XHTML Strict, but the mainstream will be stuck on 1.0 Transitional due to the high percentage of users still running Windows XP and Internet Explorer 8.0 (the last supported version of IE on XP) with its horrific XHTML Strict rendering.

Another key difference will be the proliferation of high-resolution video advertisements. With 50 Mbit fibre-to-the-home being standard, and 100 Mbit available in some startup markets, high-definition video ads will have all but replaced the static and low-res animated graphics of 2008. Upon visiting sites, users will be bombarded with motion, forcing them to click on the ads or risk being sent into an epileptic seizure.

Instead of the traditional Flash, video ads will be streamed out in standards-based MPEG-5, which will have become the ISO standard for compressed 2160p high-definition content. Thanks to the standards-based codecs used, playback in browsers will be handled by built-in code; no special plug-in will be required to view the embedded videos.

By 2013, frames and tables as layout crutches will have been all but eliminated from modern web sites. Instead, well-placed div tags will denote the content while CSS stylesheets will tie everything together, effectively separating content from layout once and for all. All still in XHTML 1.0 Transitional, however...
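A toy sketch of what that separation looks like (the ids and rules are invented, and in a real page the style element would live in the head or in an external stylesheet):

```xml
<style type="text/css">
  /* layout lives here in the CSS... */
  #nav     { float: left; width: 12em; }
  #content { margin-left: 13em; }
</style>

<!-- ...while the divs just label the content -->
<div id="nav">
  <ul><li>Home</li><li>Archive</li><li>About</li></ul>
</div>
<div id="content">
  <p>Article text goes here; no table cells needed to position it.</p>
</div>
```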

Instead of writing code by hand or using today's craptastic programs that generate unreadable code, web developers will use a free, open source toolkit for WYSIWYG development that generates completely readable XHTML and CSS (including ECMAScript glue code that automagically works around known browser bugs and deficiencies).

These tools will use advanced NLP algorithms to automatically add semantic attribute data to pages. Users will be able to tweak this attribute data by hand (it will be represented as RDF embedded in XHTML), but for the most part the automatic processing will greatly increase the searchability and indexability of web documents by providing standard semantic attributes that can be used by search engines, mashup engines and query services alike.
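One plausible shape for that embedded attribute data is RDFa, currently a W3C draft for putting RDF statements into XHTML attributes. A hypothetical fragment such a tool might emit could look like this:

```xml
<!-- hypothetical output: Dublin Core metadata embedded directly in the markup -->
<div xmlns:dc="http://purl.org/dc/elements/1.1/" about="/2013/web-predictions">
  <span property="dc:title">The Web in 2013</span>, written by
  <span property="dc:creator">A. Student</span> on
  <span property="dc:date">2008-05-11</span>.
</div>
```

A crawler that understands the Dublin Core vocabulary could then index the title, author and date without having to guess from the surrounding prose.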

As for the question of whether or not the web will be "better" than it is now... Things will be much more standards-compliant- not necessarily compliant with the standards of 2013, but an incredible improvement by today's standards. In fact, browsers of 2013 will refuse to render any page that does not validate, rather than falling back on unspecified behavior. Because advertisements will be more graphical, they will tend to be more distracting than what we deal with today, but those same technologies will allow for advanced interfaces with fluid 3-D effects and transitions, making everything feel much more interactive.

But when it comes to actual content, people will be using this new, enhanced web in much the same way as we use ours now; it will simply be flashier, its interfaces more animated and fluid, and the whole thing much less bandwidth-efficient.

References:

http://en.wikipedia.org/wiki/XHTML

http://www.theregister.co.uk/2008/01/24/h20_sewer_rollout/

http://www.devarticles.com/c/a/Web-Style-Sheets/DIV-Based-Layout-with-CSS/

http://en.wikipedia.org/wiki/ECMAScript