Thursday, February 28, 2008
Tag cloud search
It comes as a widget, so you can easily add it to your web site too.
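For the curious, the usual trick behind rendering a tag cloud is mapping each tag's frequency to a font size on a logarithmic scale, so a couple of hugely popular tags don't dwarf everything else. A minimal sketch (the tag data is made up):

```python
import math

def tag_cloud_sizes(tag_counts, min_px=10, max_px=32):
    """Map each tag's frequency to a font size using log scaling,
    so a few very popular tags don't dwarf everything else."""
    lo = math.log(min(tag_counts.values()))
    hi = math.log(max(tag_counts.values()))
    spread = (hi - lo) or 1.0  # avoid divide-by-zero when all counts match
    return {
        tag: round(min_px + (math.log(n) - lo) / spread * (max_px - min_px))
        for tag, n in tag_counts.items()
    }

sizes = tag_cloud_sizes({"web2.0": 40, "tagging": 12, "semantic": 3})
```

The most frequent tag lands at the maximum size, the rarest at the minimum, and everything else falls in between on the log scale.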
Facebookster
There are numerous complaints about Facebookster's business. The company has issued no official statement denying those allegations, and there is no reliable source confirming that this is a "spammer" or "scammer" operation. Until there is, I suggest everyone stay away from the company.
See my comments post.
Facebookster is a Facebook application development company. Its business objective is to "make money with Facebook". The company has built a handful of FB applications.
According to its Web site, the company employs 100+ developers and designers and provides Facebook and social network related consulting to Fortune 100 clients.
The company published an essay on "How to make money with Facebook", in which it describes seven ways to achieve this objective:
- Selling ads
- Sponsorship
- Sell goods and services
- Write a Facebook book
- Write a Facebook blog
- Develop Facebook apps as a consultant
- Sell your Facebook app
Can you imagine what it would be like if FB went down for 24 hours? Would it have a similar impact as if Google or Yahoo! went down for 24 hours? Which would be worse?
Wednesday, February 27, 2008
Social Networks for Conferences
Google Social API
Sunday, February 24, 2008
The Semantic Web is expected
The Personal Semantic Web
List of TheBrain common uses: TheBrain
Taglocity Transitions Tagging To The Enterprise
Tagging online ads for web analytics
Searching/Tagging in Enterprises
1: R. Barrows and J. Traverso. Search Considered Integral. ACM Queue, 4(4):30-36, May 2006.
Tag Hate
Friday, February 22, 2008
visualizing tags
First, there is the ubiquitous tagcloud.
But tagclouds aren't the only way to explore tags. Some other examples:
Delicious Soup
Fidgt - uses "Tag Magnets" to find other users with the same tags.
Mind My Map
Images of Mind My Map and Delicious Soup are from Visual Complexity
Notorious: Enterprise social bookmarking
The Future of Tagging
Human tagging will be needed for problems that computers cannot easily process. I see this as primarily human evaluation: humans can make value judgments (good/evil, happy/sad, positive/negative) that machines are incapable of making without human guidance. Personally, I see tagging being applied by people to people. I can envision people tagging others' blog pages with their own value judgments as the primary form of tagging.
The Good, the Bad, and the Hammer.
Social networks are a technology, just like the hammer is. Is the hammer good or bad for society? Hammers can be used to build houses or to smash in windows... not to mention all the cheesy murder mysteries on the tube involving a bloody hammer.
Obviously the hammer is a much simpler technology than a social network is but I am of the opinion that both technologies are of the same fundamental nature; as technologies, they are both tools used to carry out a task. As such, both hammers and social networks can be good or they can be bad.
In class our debate was less about the technology of a social network than about what social networks are currently used for, and at one point the 'Wild West' analogy was made but we did not really run with it. Where I work we have a social website that is only accessible from inside the building. We use it to escape from Lotus Notes email (forums, hurray!), schedule conference rooms, leave work-related messages, and figure out who has the skills necessary for a specific task. The issue of privacy is completely removed from the equation. This network speeds up workplace efficiency by spreading important news effectively, and it keeps itself from being a time sink since it is used only for work-related information. If the privacy issues of Facebook and MySpace equate to an application of technology that is bad for society, then surely the application of the technology at my work location is an example of a good one. Our debate in class followed a similar approach: an example of a good application, like organizing people for a rally, would be given, then countered with an example of a bad application, like an internet stalker.
If the current status of social networks on the web is equivalent to the rough, lawless times of the Wild West then the enterprise level, closed social network where I work can be likened to the Sheriff rolling into town and law and order taking a foothold. Just like the Wild West was developed and settled, so will social web technologies. But, like any tool or technology, there will always be good and bad applications depending upon how people choose to use them.
Social network for corporate travelers
It gives us an idea about how Web 2.0 could be used to share information with others. It has enabled us to share information that was rarely available otherwise. You can rely on information that you get from people you know rather than relying on some other source.
Wednesday, February 20, 2008
Microsoft wants YOU...
Is this Microsoft's attempt at becoming more "open"? Will we see students flocking to download Visual Studio or Expression Studio for development? I've used Visual Studio as the primary development tool through most of college and at work from time to time. I certainly see the value in having it, but is this enough to steal users away from Eclipse and other open/free IDEs?
Tuesday, February 19, 2008
Categories and Tags
Sunday, February 17, 2008
An example of the social web gone bad
Semantic Web Should Improve Search
Semantic Web Case Studies
Last.fm - tagging your musical life
Advertising on the Web
Then there are the "shoot-the-monkey" type ads. Basically you click on this moving monkey or something, and then it tells you you won a prize since you were able to click on it. But it actually ends up forwarding you to some site where they want you to buy something. Susan Kim (creative director at advertising.com) has an interesting article (http://www.imediaconnection.com/content/18040.asp) describing how the appropriate use of "rich media" (essentially anything beyond a simple .gif or .jpeg) can increase clickthrough rates by up to 14,000% or more. But do it wrong, and you'll simply turn away potential clickers.
It's a constant struggle between the advertisers and the consumers viewing the sites. Consumers don't want to deal with ads, and will use things like Firefox's AdBlock plug-in to automatically hide ads on pages. But advertisers want consumers to click their ads, and possibly end up spending money at their advertiser's sites as a result of that click.
Honestly, I find most forms of advertising annoying. But some ads are actually fairly clever and even somewhat useful/relevant. When I do a Google search for "National Rental" (trying to find the home page of National Rent-a-car), the first thing that shows up is a "Sponsored Link" to National Car Rental (essentially an ad that National paid Google to place automatically when certain search terms are entered). This is useful, since I was trying to navigate to the National web site anyway, and the ad provides a direct link to the place I was trying to get to.
Advertisers need to find that happy balance between "my head hurts from looking at this advertisement" and "I didn't even notice that, it blended right in." I think they are making progress, but I'm sure it will remain an ever-changing field, especially since web technologies advance so quickly.
Web Search Aggregators?
And there were others, such as askjeeves.com (now just ask.com), that made use of advanced natural language processing algorithms to attempt to answer your question intelligently rather than simply pattern matching the fulltext of the web for your search term. Ask Jeeves was especially nice, because it felt more like you were able to intelligently ask a question and get a response, such as "Where can I find the exchange rate from dollars to GBP?" None of the other search engines really were able to do that.
One other site that stuck out was MetaCrawler.com. It didn't actually maintain a web index of its own; instead, it would send your search query to all the major search engines, rank the results, and then show an aggregated list of potential hits. Before I started using Google, MetaCrawler was my default search engine for pretty much any query (with askjeeves.com a close second for anything that could easily be formed into a question).
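The aggregation step a meta-search engine performs can be sketched with a simple Borda-count merge: each engine's ranked list awards more points to higher-placed results, and the points are summed across engines. This is an illustration of the general idea, not MetaCrawler's actual algorithm:

```python
def aggregate_rankings(result_lists):
    """Merge ranked result lists from several engines with a simple
    Borda count: a URL earns more points the higher it appears in a
    list, and points from all engines are summed."""
    scores = {}
    for results in result_lists:
        n = len(results)
        for rank, url in enumerate(results):
            scores[url] = scores.get(url, 0) + (n - rank)
    # Return URLs ordered by total score, best first.
    return sorted(scores, key=scores.get, reverse=True)

merged = aggregate_rankings([
    ["a.com", "b.com", "c.com"],   # engine 1's ranking
    ["b.com", "a.com", "d.com"],   # engine 2's ranking
    ["b.com", "c.com", "a.com"],   # engine 3's ranking
])
```

Here "b.com" wins because two of the three engines ranked it first, even though no single engine's list is taken as authoritative.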
It appears that MetaCrawler is still around, still searching, as are most of the other big players. But does anyone actually use them anymore, with Google as the golden standard of web search? Will some new player enter the field with some amazing new technology that could challenge Google's dominance? I'm looking forward to that day, even if it simply means that it will force Google to innovate once again.
Xfire
Amazon's Mechanical Turk
An article on Salon.com discusses the mturk service. Opponents have called it a "virtual sweatshop". Rebecca Smith, a lawyer for the National Employment Law Project, sees mturk as
"...just another scheme by companies to classify workers as independent contractors to avoid paying them minimum wage and overtime, complying with non-discrimination laws, and being forced to carry unemployment insurance and workers compensation."
Probably the best-known example I found was an art project, "The Sheep Market". As part of his master's thesis, Aaron Koblin submitted a request for 10,000 drawings of sheep facing left at $0.02 apiece.
To test out mturk I drew several pictures for collaborativedrawing.com's next art project. I earned $0.04 in Amazon credit.
Semantic Web and Natural Language Processing
tagging to improve image search
Google Image Search isn't the only situation in which user generated tags can improve image search. In a project described here, several prototypes were developed and tested for museums to gather tags to improve access to their online art collections.
Some museums have already implemented similar systems. The Cleveland Museum of Art's online collection makes it easy to tag images by placing a button labeled "Help others find me" next to the image of the art piece (see example). Unfortunately, there doesn't seem to be a way to view the tags that other users have submitted.
While these concepts aren't new, I think that we're going to start seeing more and more websites using user-generated content to improve search.
Yahoo stops innovating?
AbsentEye
Saturday, February 16, 2008
Web 2.0 is driving the IT industry
As people keep using different platforms, they will want to use more and more Java code. This seems a fairly clever move to me. Also, now they have MySQL, so all PHP-MySQL fans will also have to go to Sun.
CEO Jonathan Schwartz's statement - "but I prefer to focus on acquiring new customers, not on the competition" confirms the fact that they also want more users to use their products and eventually become their customers some day.
Amazon Web Services went down
"Nobody is going to trust their business to cloud computing unless it is more reliable than the data-center computing that is the current norm. So many Websites now rely on Amazon’s S3 storage service and, increasingly, on its EC2 compute cloud as well, that an outage takes down a lot of sites, or at least takes down some of their functionality. Cloud computing needs to be 99.999 percent reliable if Amazon and others want it to become more widely adopted."
This raises an issue with relying on storage services like S3: one should never build an architecture that requires high-availability access to such a service. Further, it's a warning signal to storage service providers to build architectures with no single point of failure.
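For perspective on what the quoted "99.999 percent reliable" actually demands, a quick back-of-the-envelope calculation of allowed annual downtime:

```python
# Allowed annual downtime at a given availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_pct):
    """Minutes per year a service may be down and still hit the target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

three_nines = downtime_minutes(99.9)    # roughly 526 minutes, nearly 9 hours
five_nines  = downtime_minutes(99.999)  # roughly 5.3 minutes
```

In other words, "five nines" allows barely five minutes of outage per year; a multi-hour outage like the one described blows that budget many times over.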
Friday, February 15, 2008
Yuwie
The Semantic Web - Not Quite Machines Only
UMBC's Swoogle is a semantic web search engine designed to retrieve semantic web documents for queried terms. Yet even if a robot could go to Swoogle and retrieve a set of semantic web documents how does it know which one has the best definition for the term it needs?
I've been working on a way to solve this problem using standard AI state-space search algorithms to find the best definition for a term based on the terms it occurs with. Since you need definitions for all of your terms, you should try to find ontologies that define as many of them as possible; that way you can cover the terms with the fewest semantic web documents.
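The coverage intuition here is essentially the classic set-cover problem, which can be sketched with a greedy heuristic (this is a much-simplified illustration, not the actual search algorithm, and the ontology data is made up; a real system would get term lists from a Swoogle-style index):

```python
def choose_ontologies(terms, ontologies):
    """Greedy set cover: repeatedly pick the ontology document that
    defines the most still-uncovered terms, until all terms are
    covered or no ontology helps."""
    uncovered = set(terms)
    chosen = []
    while uncovered:
        best = max(ontologies, key=lambda o: len(uncovered & ontologies[o]))
        gained = uncovered & ontologies[best]
        if not gained:
            break  # remaining terms aren't defined anywhere
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

chosen, missing = choose_ontologies(
    ["person", "knows", "organization", "event"],
    {
        "foaf.owl": {"person", "knows", "organization"},
        "sioc.owl": {"person", "event"},
        "dc.owl":   {"organization"},
    },
)
```

Greedy set cover isn't optimal in general, but it gives a reasonable baseline before bringing in full state-space search.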
Automatically Tagging Information
Thursday, February 14, 2008
Folksonomy
We need to Standardize License Agreements
In class Dr. Chen expressed the need to have information in a machine-readable format. For this purpose we mainly use standards like FOAF, OWL, etc. Companies support the development of such standards because it's in their interest to have all data on the Web in machine-readable format: they can make money out of it.
Similarly, why shouldn't we have a standard for license agreements and privacy policies? E.g. a site like Orkut or MySpace should publish usage and license information in some standard format.
E.g.
<company>Google</company>
<application>Orkut</application>
<profilemode>public</profilemode>
<informationlifetime>forever :)</informationlifetime>
<othertags>value</othertags>
This way, I will be able to decide whether I want to join a particular site or not. I will use some automated "Software Lawyer" that tells me the risk associated in joining a particular site.
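As a toy illustration of the "Software Lawyer" idea, here's a sketch that parses such a standardized policy and flags risky terms. Everything here is hypothetical: the `<policy>` wrapper element, the tag names, and the risk rules are all made up for the example.

```python
import xml.etree.ElementTree as ET

# A hypothetical standardized policy document, wrapped in a root element.
POLICY = """
<policy>
  <company>Google</company>
  <application>Orkut</application>
  <profilemode>public</profilemode>
  <informationlifetime>forever</informationlifetime>
</policy>
"""

def assess(policy_xml):
    """Flag terms a user might consider risky before joining a site."""
    root = ET.fromstring(policy_xml)
    risks = []
    if root.findtext("profilemode") == "public":
        risks.append("profile visible to everyone")
    if root.findtext("informationlifetime") == "forever":
        risks.append("data is never deleted")
    return risks

risks = assess(POLICY)
```

Because the format is standardized, the same few lines of code could assess any site's policy; that is exactly what ten pages of legal prose prevent.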
But I am sure that these companies won't have such a thing because they all want to use our information in abnormal ways and they want to hide this fact using those 10 page license agreements.
What do you guys think? Or does there exist something similar to what I just proposed?
Wednesday, February 13, 2008
Social Wallpapering?
I would consider "social web" to be more of a "Web 2.0" label. The forums, communities, and sharing sites of "Web 1.0" are by no means outdated now. When I think of a forum or sharing site from "Web 1.0" I think of the term "community"; when I think of Facebook, MySpace, or Flickr I think "Web 2.0" and "social web."
So when I discovered "Social Wallpapering" I got confused. It seems to be a wallpaper-sharing site like stock.xchng and Caedes. Despite the logo, Ajax buttons, and underdeveloped state, I refuse to call this social web. It's also hard to take the site seriously, since wallpapers are currently going up with no guarantee that the submitter has the rights to the artwork, mainly because this "community" site doesn't even have user registration set up yet.
Tuesday, February 12, 2008
Facebook = Blackhole?
Is Facebook A Black Hole For Personal Info? Apparently if you delete your profile, it not only still exists but is accessible to outside users. A dude tried to delete his Facebook profile, found out the information was still available on Facebook's servers, and started an email campaign to have Facebook remove it. Facebook emailed him back saying that they had deleted his account. Well, almost... a reporter later found his empty profile and was still able to contact him through the network.
Industry and Academia Care about Social Networks
Academia has spent a great deal of effort studying the social web. In fact, by comparing Google Scholar result counts for social networks with those for more traditional research areas of computer science, we can see that social networks are a strong field of study. In order, the queries and their result counts are: algorithms with 4.2 million, social network with 2.5 million, machine learning with 1.9 million, natural language processing with 1.8 million, artificial intelligence with 1.3 million, and finally the semantic web with 0.3 million.
Monday, February 11, 2008
Computer Books Riding the Web Revolution
Open Source + Java + Web = Good
The open source community (and the Java open source community in particular) has a very good track record of supporting the web as a whole, from its HTML Parser to its JDOM parser, to the Apache Tomcat web server, all the way back to the Apache httpd web server, which is still the most popular web server in the world. Truly, Java and open source have done wonders both for the web and with the web.
As Paul's choice and history shows, open source on the web is a smart way to develop.
Mobile Social Networks
Sunday, February 10, 2008
Open Source + Social Web Applications
In searching for the best software to use, I came across two products, Movable Type and WordPress. I was initially drawn to Movable Type, but after playing around with it a bit and reading up on the licensing, I chose WordPress. Currently to use MT in an Enterprise environment, you need to purchase a license for the premium version. Now, they have a completely Open Source version that they released somewhat recently, but it seems to be stuck in a perpetual beta state, and I wouldn't feel comfortable using it in our environment because of that.
WordPress is pretty nice though, I was able to quickly and easily integrate it into our LDAP authentication database to enable single-sign-on for all our users, and there are thousands of plug-ins out there that enable all sorts of functionality.
In this case, the more open-source solution won out over the proprietary solution since we don't really have a budget allocated to purchase a commercial software license for blogging software. At the same time though, Movable Type had a few features that WordPress doesn't seem to (at least without finding a plug-in to perform the task).
So, which one is the right business model? Do you release something for free for personal use but restrict your license so that commercial entities must purchase one (a la Movable Type or the Qt graphical toolkit)? Or do you license your code under the GPL and sell support contracts to those who want or need them (a la the MySQL DBMS), hoping that people will pay? I'm not sure... My gut feeling leans towards the second, simply because in the end more people will be using your product, meaning hopefully more people will end up paying for support. But at the same time, if people don't have to pay, will they?
OpenSocial: Pros and Cons
...There's a lot to be said for being "below the radar" when you're a marginalized person wanting to make change. Activists in repressive regimes always network below the radar before trying to go public en masse. I'm not looking forward to a world where their networking activities are exposed before they reach critical mass. Social technologies are super good for activists, but not if activists are going to constantly be exposed and have to figure out how to route around the innovators as well as the governments they are seeking to challenge...
Tim O'Reilly's blog describes an opposing view. In O'Reilly's opinion, opening up social graphs will eliminate the false sense of security many social network users have. Security-through-obscurity is never a good thing, and by showing people how their publicly posted data can be used, it's hoped that they learn to better protect their information.
Another blogger on privacy blog makes the point that
Relationship information is not the property of individuals - it [is] held in joint custody among all parties in a relationship...
If someone that I've 'friended' wants to somehow use a social network (that I'm a part of) do they need to get my permission?
Open Social Web's Bill of Rights is a good start, but obviously this has no real legal weight and depends on companies voluntarily following it. These issues should be dealt with now, while the technologies are still being developed.
Code-a-thons
It seems some people are taking rapid development quite seriously. ReadWriteWeb has an interesting article about weekend code-a-thons: rapid development retreats where teams work either with each other or against each other to develop web applications, going from design to launch in just two days.
If I had a weekend to spare I would love to try this out. I think it would be an excellent way to spend a weekend, learning from and meeting other like-minded people as well. I imagine it would be like trying to do our group projects in a weekend.
New video API
Saturday, February 09, 2008
RSS and Data Signaling
Launch of Web 2.0 Security Forum
Here is the article.
Security is a very important component of Web 2.0, considering the amount of data that is shared and the number of users participating in creating it. This is a promising step by these companies that will only help the growth of Web 2.0 users and applications.
Though I am not sure if it is going to be significantly different than OWASP.
Data Storage
"The lack of sufficient data storage capacity has legal compliance implications for companies facing data security, privacy protection, record keeping, data retention and other data usage requirements of e-discovery rules for litigation, the Sarbanes-Oxley Act, Health Insurance Portability Act and other laws, the report said." [Privacy Law Watch]
Social web companies have to think about both aspects: offering space to their users for their content and any data laws that may apply to the company itself. Data storage is a critical part of social web technology and I never gave it much thought before last night.
Issues with Social Graph API usage
...using specialized spamming software such as FriendBot/BuddyBot, which are automated friend adders, or tools that post comments/notes to multiple users. Such tools use the sites' search features to reach a certain section of the users and communicate with them from a fake account. Now, with social graphs, it would be easier for such bots to retrieve lists of related users.
Further, the Social Graph API can be used as a tool by social-engineering hackers to earn undeserved trust by creating and exposing networks of weak social connections. This can be exploited further to carry out phishing attacks.
The Facebook API
Here is a PHP Facebook tutorial, a Python Facebook tutorial and even a Java Facebook tutorial. I also found some helpful information on demystifying the application form.
To briefly summarize, it seems you can write your Facebook application in whatever language you want. Your application will still need to use the Facebook API as well as some FBML tags, but developers are not limited to just FBML. This is because of the architecture Facebook uses to run applications: a developer hosts their code on an external server, and whenever a user makes a request of that application, Facebook makes a request of the code running on the external server. Facebook then displays the application.
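The callback flow described above can be sketched schematically. Note this is only an illustration of the request path, not the real Facebook Platform API: the function names are invented, and the FBML "rendering" is faked with a string substitution.

```python
# Schematic of the canvas-page flow: Facebook receives the user's request,
# calls your server's callback URL, and renders the markup you return.
# All names here are illustrative, not the real API.

def canvas_callback(params):
    """Runs on YOUR external server. Facebook forwards the user's
    request here (with parameters identifying the user) and displays
    whatever FBML-like markup this returns inside the canvas page."""
    user = params.get("fb_sig_user", "stranger")
    return "<fb:name uid='%s' useyou='false'/> says hello!" % user

def facebook_renders(callback, request_params):
    """Stand-in for Facebook's side of the exchange: forward the request
    to the app's server, then translate FBML tags into plain HTML
    (here faked by substituting the user's display name)."""
    fbml = callback(request_params)
    return fbml.replace("<fb:name uid='42' useyou='false'/>", "Alice")

page = facebook_renders(canvas_callback, {"fb_sig_user": "42"})
```

The key point is that your code never serves the page directly to the user; Facebook sits in the middle, which is why the app can be written in any language that can answer an HTTP request.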
Some pretty interesting stuff and certainly some good starting points for those of you interested in creating Facebook applications.
Personalized social search
...defined social search as "...any search aided by a social interaction or a social connection... Social search happens every day. When you ask a friend 'what movies are good to go see?' or 'where should we go to dinner?', you are doing a verbal social search. You're trying to leverage that social connection to try and get a piece of information that would be better than what you'd come up with on your own."
She mentions that social search has not shown its potential yet. Google tried to implement social search by giving users a facility to annotate search results and allowing those annotations to be shared with people of similar interests. They tried it in Google Co-op, but the model didn't work very well. Google is also carrying out an experiment to let users vote on search results.
The critical thing would be to make use of searches on similar topics carried out by other users with the same interests. To find such users, it would make sense for Google to utilize users' connections from their friends lists on Facebook or MySpace, where one can get relevant social context. Further development of social graphs would be a noteworthy step.
So, in the future, it won't be surprising to see Google's PageRank influenced by connections on social networking sites, providing more personalized search results.
Blogs: David vs Goliath
Friday, February 08, 2008
Data on social networking sites
FOAF Consolidation and Editing
Thursday, February 07, 2008
Directory of Web 2.0 sites
Either the search facility is not working properly, or the sites that I think are Web 2.0 sites are in fact not Web 2.0 sites.
E.g., when I searched for "live" I expected live.com to be listed.
Search for the tag "Microsoft" and you get Facebook in the results.
Maybe someone used wrong tags intentionally.
$1,100,000 Social Networking domain
A Not So Small World
This article points out many of the flaws in experiments designed to test the small-world hypothesis, including unrepresentative samples and low success rates. It's an interesting read on an idea I had largely accepted, and adapted to, without proof.
We may not be 6 degrees apart
The experiment gave envelopes to people in Kansas with the name of a target person and several details about that person's life. Participants were asked to pass the envelope on to someone they knew who could get it closer to the target. Some of the stories sound pretty amazing:
"...an envelope that made its way from a wheat farmer in Kansas to the target, a divinity student’s wife in Cambridge, Massachusetts, with just two connections."
This recent article from Discover Magazine discusses some of the limitations of Milgram's experiment. The studies had a completion rate of only 5-30%. Judith Kleinfeld's paper, "Could it be a big world after all?", describes how, after extensive research, Kleinfeld found only two replications of Milgram's work, which is pretty low for something so universally accepted as truth. Another drawback of the chain-letter approach is that because participants can't see the entire network, they may inadvertently send the letter further away from the target.
With the popularity of social networking sites comes the opportunity to further research the idea of six degrees of separation. Many of the drawbacks of Milgram's experiment could be eliminated: researchers looking at data from social networking sites can see the entire social network and find the shortest path, and completion rates aren't an issue if computers are tracing the connections. One example, LiveJournal Connect, tries to find a path between two LiveJournal users. You could probably get some pretty interesting results with something similar on Facebook.
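Finding the shortest path a researcher would trace through such a network is a textbook breadth-first search over the friend graph. A minimal sketch, with a made-up friend list:

```python
from collections import deque

def degrees_apart(graph, start, target):
    """Breadth-first search over a friend graph; returns the number of
    hops between two users, or None if they aren't connected."""
    seen = {start: 0}              # user -> distance from start
    queue = deque([start])
    while queue:
        person = queue.popleft()
        if person == target:
            return seen[person]
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen[friend] = seen[person] + 1
                queue.append(friend)
    return None

friends = {
    "alice": ["bob"],
    "bob":   ["alice", "carol"],
    "carol": ["bob", "dave"],
    "dave":  ["carol"],
}
hops = degrees_apart(friends, "alice", "dave")
```

Unlike Milgram's participants, BFS always finds the true shortest chain, which is exactly why data from a social networking site would sidestep the routing errors of the letter experiments.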
Security? We'll get to that later.
So why do we see security and privacy problems popping up in the Web 2.0 world?
One possible explanation is that if you want to share things with others (and Web 2.0 is all about sharing), then you must sacrifice a certain degree of privacy. Also, if your code is open-source, then hackers will find exploits by merely looking at your code.
But there's another explanation: this article suggests that developers are just too eager to implement new features, and security gets neglected. This phenomenon is not new: the same thing was going on when new and exciting desktop apps were coming up, the philosophy being that the priority is shiny new features for the user, and a stable and secure back-end is merely an afterthought: something that would be nice to have, but no one will notice if it's not there.
FOAF network visualization
Why People Join Social Networks
I Divorce U
Wednesday, February 06, 2008
Simple news to map mashup
Tagging leading to semantic web
What gnizr is not letting me do (yet) is to record more explicit information- who authored a document, when it was published, what institutions a person or a document is affiliated with, and so on. I think this is, in large part, the promise of employing semantic tags. When we tag a bookmark (in my case, a document), we are asserting that (at least to ourselves) the tag word is related to the document. That's about it. When we tag a document with a geoname, we assert something a bit more specific- that the location is related to the document. With this information alone, we can create a semantic graph whose nodes are the tag words, documents and locations, and whose edges are the relations we implicitly create when tagging. We can take this one step farther and imagine that one tag may be related to another if they are both used to tag the same document. The tags are at least related by that common document. The aggregate tag relations semantic graph can be used as the basis of a tag recommendation system (one of our suggested projects I'm interested in).
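The co-occurrence idea in the paragraph above can be sketched concretely: count how often two tags appear on the same bookmark, treat those counts as weighted edges, and recommend the tags most strongly connected to a given one. The bookmark data here is invented for the example:

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(bookmarks):
    """Build weighted tag-tag edges: two tags gain an edge increment
    each time they appear together on the same bookmarked document."""
    edges = Counter()
    for tags in bookmarks.values():
        for a, b in combinations(sorted(set(tags)), 2):
            edges[(a, b)] += 1
    return edges

def recommend(edges, tag, top=3):
    """Suggest the tags that most often co-occur with the given tag."""
    related = Counter()
    for (a, b), weight in edges.items():
        if a == tag:
            related[b] += weight
        elif b == tag:
            related[a] += weight
    return [t for t, _ in related.most_common(top)]

edges = cooccurrence_edges({
    "doc1": ["semantic-web", "rdf", "tagging"],
    "doc2": ["rdf", "tagging"],
    "doc3": ["semantic-web", "rdf"],
})
suggested = recommend(edges, "rdf")
```

This is the simplest possible version of the aggregate tag-relations graph; machine tags and FOAF relations would add typed edges on top of these untyped co-occurrence ones.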
I think that by using additional machine tags based on FOAF relations or other standard metadata (e.g., Dublin Core), users could encode more explicit knowledge (e.g., Bob is the author of document x), and thus richer semantic graphs. I'm suggesting that a semantic social bookmarking/tagging system could provide an easy and effective user interface for generating semantic graphs. I think the idea requires further elaboration and refinement, but I'm optimistic that a tighter integration of gnizr and additional semantic graph interaction tools could provide a nice path toward a really useful semantic web platform.
Typically, by the time I think of an idea it's become passé, and this case is probably no different. So I Googled "social bookmarking semantic graph" and found a noteworthy blog post that explores extracting semantic relations from Del.icio.us tags using tag co-occurrence and frequency. There you go... at least one of the key ideas has already been floated! I'd better get coding!
A Social Network For Everyone!
In reading "Social Network Sites: Definition, History, and Scholarship" I ran across a single mention of Ning, which is a platform for creating social networks. There are already 8,009 social networks on Ning, and the Popular Networks page shows networks ranging from Horse Aficionados to Disc Golfers to school affiliation networks. A cool feature of that page is a tag cloud linking to tagged networks. Although clicking on 'food' makes me wonder why 1Club.FM Radio Portal is tagged food...
Anyway, I suppose it was a natural evolution with the popularity of social networks someone would build a platform to create new social networks on.
The Machine is Us/ing Us
This video was made by Professor Wesch, a cultural anthropologist at Kansas State University, and because of his background his point of view is fairly unique compared to that of many other experts.
The video can be found at: http://www.youtube.com/watch?v=NLlGopyXT_g
Professor Wesch's page is: http://www.ksu.edu/sasw/anthro/wesch.htm
Who (doesn't) need Newspapers?
Who needs (hard-copy) Newspapers?
I don't.
When I've got a computer handy, I can navigate to google.com, whose iGoogle home page/news aggregator displays all my favorite RSS feeds.
On-the-go, I can whip out my Palm Treo 700p and navigate to feedm8, a handy mobile feed aggregator that gives me access to the same content, simply in a more accessible format. As long as I have cell coverage, I am good. But this requires paying for an unlimited data plan on my phone, which can be expensive.
But what about people who aren't Internet-savvy? How about those who don't have fancy smartphones that can display newsfeeds, or can't afford a cell phone plan that supports data? Or what if a person simply wants to sit outside in bright sunlight and read w/out squinting? (Many computer and smartphone screens are nearly unreadable in bright sunlight.)
For those people, a paper newspaper is still probably the only viable option for getting news on-the-go, or even at home. Many people in this country and around the world still don't even own a computer, or don't have reliable Internet access.
I would argue that paper newspapers will only be obsolete once there exists a device that is completely portable, has a large screen that is easily readable (and functions well in direct sunlight), has very good battery life, has ubiquitous wireless access to Internet newsfeeds from anywhere, and costs (along with its wireless service) close to nothing. Also, most of the population would need to have one of these devices and know how to use it.
I believe that Amazon's Kindle (www.amazon.com/kindle) is a great evolutionary step. Its screen is beautiful (and works great in the sun), its battery life is impressive, and its ability to access the Internet at broadband speeds from anywhere in the US (that can pick up a Sprint cell tower) without requiring a wireless data subscription is key. But, due to the nature of the device, Amazon had to lock things down so that it can only browse a limited set of web sites, including amazon.com (to download eBooks and purchase things) and Wikipedia. Also, it costs $399, which many people can't afford, and it only works in the US (so far). But it does offer instant access to many newspapers and blogs. From the Kindle home page:
- Top U.S. newspapers including The New York Times, Wall Street Journal, and Washington Post; top magazines including TIME, Atlantic Monthly, and Forbes—all auto-delivered wirelessly.
- Top international newspapers from France, Germany, and Ireland: Le Monde, Frankfurter Allgemeine, and The Irish Times—all auto-delivered wirelessly.
- More than 250 top blogs from the worlds of business, technology, sports, entertainment, and politics, including BoingBoing, Slashdot, TechCrunch, ESPN's Bill Simmons, The Onion, Michelle Malkin, and The Huffington Post—all updated wirelessly throughout the day.
Resurrected: The Industry Standard 2.0
Today, The Standard is re-branding itself under the social networking banner as an online-only publication consisting of a concoction of imported feed stories from around the net, well-known industry writers, freelance journalism and analysis, and community contributions. The social networking aspect comes in the form of a prediction system in which members can bet (with virtual money) on the outcome of any given story. The prediction is a percentage based on how much members wager each way; wagering more on a story moves the prediction more.
But is this just more of the same? One could view the prediction system as a remodeled Digg-like voting system and argue that we don't need yet another site to tell us about the Microsoft bid. This may end up being just another cliché site that gathers a core audience of contributors. Now, if they add in real-money betting... that would be interesting.
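The Standard doesn't publish its exact formula, but a wager-weighted prediction could be as simple as the share of virtual money bet on each side. A minimal sketch (the function name and weighting scheme are my assumptions, not The Standard's actual system):

```python
def prediction(bets):
    """bets: list of (outcome, wager) pairs, where outcome is True
    ("will happen") or False. Returns the percentage of all virtual
    money wagered on True."""
    total = sum(wager for _, wager in bets)
    if total == 0:
        return 0.0
    yes = sum(wager for outcome, wager in bets if outcome)
    return 100.0 * yes / total

# Three members bet on a story; the larger wagers move the prediction more.
bets = [(True, 500), (True, 100), (False, 400)]
print(prediction(bets))  # 60.0
```

This is what distinguishes it from a Digg-style vote: a member who wagers 500 credits pulls the percentage five times harder than one who wagers 100.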
Check out the new Industry Standard and try it for yourself.
Tuesday, February 05, 2008
Who needs Newspapers?
The presence of online news feeds, local and global bloggers, and other Web 2.0 technologies provides me with more up-to-date news than any printed paper could hope to match. Furthermore, I can see so many varying points of view on any given event that I can form a more balanced view of the news.
Take the recent story about the FARC protests, for example. In this piece they say that the original protest started on Facebook:
"The protest was started less than a month ago on the social networking website Facebook by a 33-year-old engineer, Oscar Morales, from his home in Barranquilla on Colombia's Caribbean coast.
Over 250,000 Facebook users signed on, and the movement was taken up by newspapers and radio and television stations across the country. "
And if you had any doubt about the veracity of the story, or wanted to see if it had any spin on it, you could just check Google News with the right query words. Then you would have noticed that many nations in addition to Colombia were protesting against FARC and that the protest did spring from Facebook.
Simply put, why would I want to subscribe to a newspaper nowadays?
Monday, February 04, 2008
SAT scores and books...
Thin Versus Fat Clients
A thin client is a bare-bones computer that relies on a central server for data processing. Desktops, or fat clients, perform data processing locally. For a business, the benefits of a thin-client setup over a fat-client setup are substantial. Primarily, a thin-client setup is cheaper to run: thin clients require cheaper hardware and less power, and, because all processing occurs on a central server, they need less administrative attention.
In an office using thin clients, an application update occurs in one place, on the server, and all of the client users are upgraded without any hassle at all. If a thin client fails, it is inexpensive enough to simply throw away and replace with a new one; the user logs into the system on the replacement client and resumes work.
As we discussed last class, applications like Google Docs are already helping thin clients become a reality. If all a user is doing is loading up a web browser, is there a reason for a powerful desktop computer? If all you are doing is updating a spreadsheet or working with email, probably not. I do not believe that desktop computers will ever become obsolete, since there will always be a need for processor-intensive work, but I do believe they will become much less common, at least as far as businesses are concerned.
Barack Obama meets Web 2.0
Microsoft + Yahoo : A Web 2.0 View
Sunday, February 03, 2008
Long tail theory and web 2.0
Copying web 2.0 site is not always successful
Please follow the link below for details: Is eBay Bailing out of China?
Google releases social graph API
The API recognizes XFN relationship values such as: acquaintance, friend, met, co-worker, colleague, co-resident, neighbor, child, parent, sibling, spouse, kin, muse, crush, date, sweetheart.
But when interpreting any social website, there is still the problem of the time-dependent relevance of such links.
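These XFN relationships are just rel attributes on ordinary links, so harvesting them takes little more than an HTML parser. A minimal sketch of the idea (the class name and page snippet are mine, not part of Google's API):

```python
from html.parser import HTMLParser

# XFN rel values that describe a relationship between people.
XFN_VALUES = {"acquaintance", "friend", "met", "co-worker", "colleague",
              "co-resident", "neighbor", "child", "parent", "sibling",
              "spouse", "kin", "muse", "crush", "date", "sweetheart"}

class XFNExtractor(HTMLParser):
    """Collect (href, relationships) for links carrying XFN rel values."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rels = set(attrs.get("rel", "").split()) & XFN_VALUES
        if rels and "href" in attrs:
            self.links.append((attrs["href"], sorted(rels)))

page = '<a href="http://example.com/alice" rel="friend met">Alice</a>'
parser = XFNExtractor()
parser.feed(page)
print(parser.links)  # [('http://example.com/alice', ['friend', 'met'])]
```

A crawler that runs this over public pages ends up with exactly the kind of typed social graph the API exposes, which is also why the time-relevance problem matters: the markup says "friend" forever, even after the friendship has lapsed.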
Apple and Web 2.0
Politics and Web 2.0
Mobile Web 2.0
for the way we interact with the mobile Internet. For instance, Ajit Jaokar discusses a concept called "spatial messaging": the ability for someone to take a picture of a location, attach some text or a short comment, and then attach both to that location. The goal is that when a friend passes through, the picture and text would appear on their mobile device. That being said, one wouldn't think applications like that would work well on stationary PCs. Do you think this means that mobile and stationary websites will remain distinct throughout Web 2.0?
Saturday, February 02, 2008
Insurance companies and web privacy
In a lawsuit currently in New Jersey federal court, several families are suing Horizon Blue Cross/Blue Shield. The families are accusing Horizon of denying claims submitted for treating their children's anorexia and bulimia.
The judge in the case issued a court order requiring the families to turn over emails, diaries and any writings "shared with others, including entries on Web sites such as 'Facebook' or 'MySpace.'"
From the article:
Horizon claims that the children's online writings, as well as journal and diary entries, could shed light on the causes of the disorders, which determines the insurer's responsibility for payment. New Jersey law requires coverage of mental illness only if it is biologically based.
Horizon wants to use the kids' web postings to show that their eating disorders are caused by emotional problems and do not have a biological basis.
Imagine if insurance companies went back and read your blog postings and think about what they would be able to do with that information. They could see when you started complaining about symptoms of disease X. Even if you weren't diagnosed with disease X until after you signed up for insurance, couldn't the company use your own writings to prove that you had a pre-existing condition? As we discussed in class, insurance companies could look for people who post pictures of themselves drunk at parties or smoking and charge them higher premiums for being a "high risk".
Maybe you didn't post anything incriminating, but what about your friends? Could lifestyle factors that make you a higher risk be determined by analyzing who your friends are?
Friday, February 01, 2008
Republicans Hate Their Candidates
My system shows that Republicans hate their candidates. First up is John McCain, followed by Fred Thompson and Mike Huckabee. The only candidates who seem to be doing OK are Ron Paul and Rudy Giuliani. Since Rudy is gone and Republicans don't take Ron Paul seriously, they're stuck with a distasteful set of candidates.
Meanwhile, Hillary Clinton and Barack Obama both are doing quite well within their own parties.
Does 1 Web 1.0 company + 1 Web 1.0 Company = a Web 2.0 Company?
This isn't a look at a controversial social web business model, nor is it a look at a successful Web 2.0 company. But I think it is huge news in the web business: Microsoft has offered to buy Yahoo. Both companies have been struggling lately, and from my reading, it appears that Microsoft thinks a combined company will be a strong competitor to Google in the online advertising business. I tend to like reading Paul Kedrosky's analysis on these topics, and he says that it really may not make much of a difference: combining two failing Web 1.0 companies won't create a company able to compete against a Web 2.0 company. In fact, in a later post he mentions that Google would be able to exploit this to grow its market share. As someone who has been through one corporate merger and will likely go through another in the next six months, I can attest that productivity suffers greatly during mergers.
Privacy Issues In Social Web. A Good Real Example is Us!
A few days ago, Dr. Chen sent me an invitation to the Weekly Blogging Assignment spreadsheet. It looked kind of strange to me because it contained UMBC Campus IDs instead of just the email IDs. Later I was told that it was done for privacy reasons.
Well, it was a good decision to go this way, but maybe he should not have trusted the app he chose to host this file. Google Docs can reveal some information anyway.
There is more than one way to figure out which ID belongs to which user. First, when you open the document and open the "Discuss" pane on the right, it shows a color box next to each user who is currently editing the document, and the area of the document that user is editing is highlighted in the same color. Assuming that users work only in their respective rows, one can tell which Campus ID belongs to which user.
An even easier way is to view the "Revisions" of the document. This is self-explanatory.
That's why I say we need security first.
Another minor issue: I am able to see unpublished drafts some people have saved on Blogger. Beware, others might steal your posts/ideas :)
Note:
If someone's IDs were revealed because of the picture above, or there are any serious implications, please let me know and I will delete this post.