Archive for the ‘General Stuff’ Category

One Month Germany (or so)

All right, I got 10 minutes to write about 1 month (a bit less) in Germany.

Initially we were a bit shocked: our arrival was followed by one really hot, sunny day and then 5 or 6 rainy days. And by rainy I mean it poured, almost continuously. So that was a bummer.

What followed were a few really enjoyable days. Our container arrived a bit late, so we had time to kill and no furniture to put together. We met our parents, became members of the Zoo and really came to appreciate the excellent public transport. S-Bahn, U-Bahn, tram and bus took us wherever we wanted, and the kids enjoyed that a lot, too.

We now finally got our container with the household goods; the car is still being modified… the container truck driver from the UK was a bit of a shock. Easily the most horrible person I have ever met in my life. He accused me of telling him not to bring his buddies to unload the truck, meaning he, my neighbor and myself did that job. When he left, we still had roughly 70 packages to unpack in the house, so we spent a few days unpacking, sorting and putting together furniture. That was not too easy with the kids at home, but we made it.

Now we await our car, finally, and then we seek a normal, ordered life again from September on. Work, kids at kindergarten, etc. Call it normal, but for me and my wife it will be refreshing, as we have now spent more than 2 months camping (no furniture, no car). It’s time this is over :-)


It’s over :-)

It’s almost three years ago that my family and I arrived in the States, and in 16 days we’ll fly back home to Munich, Bavaria. I worked for Yahoo! Inc. right in the center of Silicon Valley, at their headquarters in Sunnyvale, CA. I’ve done backend Java development with a lot of Spring, a lot of web-based frontends, a lot of mashups and integration. I worked on admin scripts written in Groovy and on quick and dirty prototypes to showcase ideas, and I fixed bugs. HTML5, CSS3, JavaScript, AJAX, Java, Groovy, Grails, Android, MySql, Tomcat, etc. are the buzzwords and technologies that come to my mind.

Times were sometimes challenging, sometimes boring, sometimes exciting… we went through a few interesting phases, I guess. Yahoo! went through a lot of layoffs, right-sized its business and is just getting on track to kick ass again. Overall it was an awesome experience and I’d do it again, believe me!

I was not alone here; my family was with me. I think we *really* arrived here in California. The kids went to preschool and pre-kindergarten, my wife went to college, both my wife and I dialed 911, and I’ve been to an emergency room with one of the kids twice, including a funky ride in a paramedics van up to Stanford. I think we really lived here, we did not just work here. We lived here while relatives died back home, and we could not visit that fast.

We visited the great places a lot of people back in Munich dream of. If they ever get to California, they’ll come during the summer months, when there are way too many tourists everywhere. We enjoyed going to Lake Tahoe off-peak, snowboarding during the final days towards summer at Squaw Valley, or visiting beaches around Santa Cruz that were totally empty and looked like pictures out of a travel magazine.

We had a great time, but we’re all ready to come to Munich, again, too. These days, a strange feeling surrounds us. It’s the feeling of finally coming home again, which is good and full of excitement, and the feeling of leaving this place that treated us well, which makes us a bit sad.

What I’ll personally miss most is the people here and the way they approach anything new. For an IT guy, being surrounded by people working at Yahoo!, Google, Twitter, Facebook and the tons of startups that exist here is something I’ll just have to miss. You can visit a different user group for whatever technology you’re interested in, any week – try that in Munich. Also, the Bay Area is a melting pot for all kinds of cultures. This is probably one of the rare places where multi-cultural integration really works. I’d say our minds were definitely broadened and are more willing to accept different ideas and cultures now that we’ve lived here.

But we’re ready for Munich, too. As I am leaving Yahoo! and am ready for something new, I am actively looking into Android, and I am really fascinated by this amazing mobile platform. The picture on the left shows the Android riding a skateboard, which is also something we all look forward to: public transport :-) It might sound limiting to a lot of people in the US, but I can’t wait to hop on an S-Bahn that takes me downtown in about 10 minutes – downtown to Marienplatz, where I’ll likely enjoy a real Brezn and a real coffee in a real mug in a real cafe. We’ll leave the car in the garage all week and use it for trips over the weekend. All the things we need for our daily life – groceries and other shops, the kindergarten, public transport including Munich airport – are reachable within 5-10 minutes by foot. I call that a good thing.

My wife will have the chance to begin working again, the kids will go to a German ‘kindergarten’ and we’ll pay a fraction of what we paid in the US for child care. Food will also be cheaper, but eating out will be more expensive. For electronics, some recent comparisons are not bad at all; prices in Germany are roughly comparable to the US when it comes to the latest unlocked/non-contract Android phones, for example.

Comparing all these things is really hard. For health care for example, we’ve always been treated well in the US. But that’s of course because Yahoo! has some excellent benefits. Knowing that a lot of Americans don’t have that makes me appreciate the ‘socialised medicine’ and the regulated health care system in Germany. Even though I’ll be paying more for health care, it feels good to know that nobody is left behind when it comes to these essential things in life.

16 days. I’m hacking some Android code whenever I have time, to gain more experience, going to Android UG meetings and adding a lot of people to my LinkedIn, Facebook and Twitter follow lists. With my family, we’re driving around, visiting places we know and love. We meet friends, many for the last time. We watch Netflix streaming, the single media service I’ll truly miss (ok, I’ll also miss Google Voice…).

Bye bye California.

Oh, and I’m back in September for JavaOne 2010 :-)

Am I really doing a Project 365…? Oh, yes!

I am still shooting, 110 days straight now. Unbelievable, at least for me, but still shooting. On most days it is a lot of fun, but there are also days which are crazy. If you learn 5 minutes beforehand that your shooting for the day is off, you start getting a bit angry, I tell you :-) So far I was always able to quickly find someone else, and I just hope this luck will go on.

The most amazing thing is the networking. I know I mentioned this before, but it is just really amazing and it totally surprised me. I have the pleasure of meeting one new person a day – that is a pretty damn good addition to my social network, I would say.

Oh, and the Nullzeitgenerator-Blog wrote about my project, I am a VIP :-) I hope I can take a portrait of Helene back in Germany, too!

Still taking photos…

I have been taking a portrait each day for 55 days now. It has been an extremely rewarding experience, both from a technical, photographic point of view and from a networking point of view (55 days = close to 55 people I spent quality time with, most of them new friends). I begin to realize a Project 365 is only partially about taking pictures. Especially when doing portraits of others, the whole post-shooting process – backing up, managing multiple laptops, keeping a calendar, etc. – is another challenge you have to master. Partially for myself, to reflect on what I learned, I compiled this list of ‘experiences’ and tips:

  • I promise my subjects all RAW files and JPEG exports. Initially I shot in JPEG and then used a memory stick to transfer the images onto their laptop. This is a really time-consuming process, as you first have to transfer the images onto your laptop, then to the memory stick and later again to their laptop. It sounds like just a few minutes each day, but it gets boring. Also, when shooting RAW, it means you have to visit them later on – but what if they work in a different building? So you chat and email to meet, but all too often you just waste a lot of time delivering the images.
    Solution: burn DVDs. After importing the RAW files and converting them all to JPEG using the preset white balance from the camera, I just burn a DVD each day. I deposit the DVD on my desk for people to pick up, or carry it around for a few days in case I see them often.
  • After you’re done with a shooting, you probably have a couple hundred RAW files. Conversion to JPEG is pretty time-consuming, but luckily you don’t have to sit and watch it happening.
    Solution: I make a trade-off – while I could optimize each image for white balance, I just use the preset white balance. I take a new preset white balance before each shooting and keep it for exporting to JPEG later. The results are good enough, and I would not have the time otherwise. Exporting to JPEG is a background process using ufraw_batch – a Unix command line tool which is quite fast. It runs in the background while I work or am in a meeting.
  • Managing the pipeline of shootings is also a crazy task. I am booked – in theory – till mid-March now. I often ask my subjects to refer me to others, and that works amazingly well. It still happens that some people drop out spontaneously – what then?
    Solution: Having a calendar with names and email addresses of future shootings is critical. I also remind people 3-4 times before each shooting so the appointment is really set in stone in their calendars. For the actual shooting, I check out poses on Flickr and sometimes print them or scribble them down in a small notebook I carry. This helps to get an easy shot and leaves more room to explore each day.
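The background JPEG export from the second bullet above can be sketched as a small Java wrapper around ufraw_batch. Note that the exact binary name and flags (`--out-type`, `--wb`) are assumptions about the tool's interface, not taken from this post – check the manual of your install.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class RawExport {
    // Build the ufraw_batch command line for a set of RAW files.
    // Flag names here are assumptions about the tool's options.
    static List<String> buildCommand(List<String> rawFiles) {
        List<String> cmd = new ArrayList<>();
        cmd.add("ufraw_batch");
        cmd.add("--out-type=jpg"); // export JPEGs next to the RAW files
        cmd.add("--wb=camera");    // keep the preset white balance from the camera
        cmd.addAll(rawFiles);
        return cmd;
    }

    // Launch the conversion as a background process, so it can run
    // while you work or sit in a meeting.
    static Process startInBackground(List<String> rawFiles) throws IOException {
        return new ProcessBuilder(buildCommand(rawFiles)).inheritIO().start();
    }
}
```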

Getting late… more in the next days!

Project 365: The first 11 days

11 days have passed since I started Project 365 with my own special goal: focusing on portraits. I failed miserably on day one, but since then things have gone quite smoothly. I learned a couple of technical lessons, like choosing a higher ISO to be able to use high shutter speeds (better noisy than blurry…), finding great window spots (north-facing is best) or choosing the right distance between your subject and the background. I think a lot of these skills really develop over time, so my recipe is simply to continue doing Project 365 and become better and better.

Here’s a couple of things I learned from approaching people:

  • Create your own Moo cards, choose a prebuilt design that just looks nice and have a couple of cards at hand wherever you are. The first thing I do whenever I find an interesting face: give him/her a card. Then talk.
  • Wear serious clothing… really. Especially at the beginning, when you might not be too confident in approaching people, it makes a difference.
  • Choose people that interest you. Seriously… you might be tempted to just ask anyone, but it will likely fail. You’ll know best how to talk to people that interest you. And there might be a special link between you and them.
  • Have your camera around your neck when you approach people. Not in your bag. No one will buy the ‘photographer’ otherwise.
  • To look for people, go to public places. A farmers’ market is a great place; the people that go there are open-minded and often way more interesting than at a Safeway. Plus, it’s outside… on a cloudy day, you have perfect light.

These are just a couple of things, I am sure I can add more soon.

groovytweets update 11

groovytweets v86

Link tracking and ranking

Finally! I have been busy checking out all kinds of things, and I finally spent that other hour to finish one lingering feature in groovytweets: link tracking and ranking.

It is a notable feature, as the newsworthiness of tweets really lies in the links that people tweet. For groovytweets, of course, these are just the links with a Groovy context, coming out of the Groovy community that we trust.

One important aspect of link tracking that I noticed early on was that tracking the short links themselves does not make sense. Too many people follow a short link’s redirect, convert the same target URL into another short URL, and there we go: we have two links. So link tracking in groovytweets is based on the final destination a short link takes you to… groovytweets resolves links and follows the Location: headers till the end.
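The resolution logic can be sketched in Java like this. To keep it self-contained, the HTTP layer is abstracted into a plain map of Location: headers; the real code would issue HTTP requests and read the header from each response instead.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LinkResolver {
    // Resolve a (possibly chained) short link to its final destination.
    // locationHeaders maps a URL to the Location: header its server
    // would answer with; a URL absent from the map is a final destination.
    static String resolve(String url, Map<String, String> locationHeaders) {
        Set<String> seen = new HashSet<>();
        String current = url;
        // Follow redirects until there is no further Location: header,
        // guarding against redirect loops via the 'seen' set.
        while (locationHeaders.containsKey(current) && seen.add(current)) {
            current = locationHeaders.get(current);
        }
        return current;
    }
}
```

This way, two different short URLs pointing at the same article end up counted as one link.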

And as we are on Twitter, the link count itself is not enough. Someone once said that information older than 48 hours is practically useless for the Twitter generation. The link ranking takes the freshness of a link into account and reduces the overall ranking score over time. Older links will automatically lose a lot of ranking points just because they are old, making space for the new rising ones.
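One simple way to implement such freshness decay is an exponential half-life. The 48-hour half-life below is borrowed from the remark above, but the formula itself is an illustrative assumption, not the actual groovytweets scoring code:

```java
public class LinkRanking {
    // Half-life of a link's score, inspired by the "48 hours" remark.
    static final double HALF_LIFE_HOURS = 48.0;

    // Score = mention count, exponentially decayed by the link's age:
    // after 48 hours a link keeps half its points, after 96 hours a quarter.
    static double score(int mentions, double ageHours) {
        return mentions * Math.pow(0.5, ageHours / HALF_LIFE_HOURS);
    }
}
```

With this decay, a fresh link with 5 mentions outranks a day-old link with 6 mentions, matching the behavior described here.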

On a side note: I recently received another request to include search in groovytweets. I am looking into it, but things would just be way nicer if I had a relational DB with ‘LIKE’ etc. On App Engine, I have to build my own search index if I want to provide a fast solution, and that’s where things can get (compared to a Grails app with a relational DB) unnecessarily complex. I am not totally sure if I am doing it at all, but I have some ideas in mind.

That’s it for now!

groovytweets update 10

It’s more than 2 months ago since I blogged about the groovytweets status, but there have been numerous minor updates and improvements. The friends list (the Twitter users we collect tweets from) has been expanded to ~422 friends (and likely more when you read this), the regular expressions used to decide if a tweet is ‘groovy’ have been adapted to the changing Groovy universe (like gparallelizer being renamed to gpars, or following vmware news now), and so it goes on.

But the real meat is a bit behind the scenes. The features you’re likely to see quite early are language detection (filtering by language) and a new link ranking. I still have to improve the quality of language detection, as tweets often use English terms even if the tweet itself is written in a different language. Larger texts submitted to the Google Translation API of course yield better results; tweets having just 140 characters makes this a bit harder.

The Groovy link ranking feature can already be seen live in an early version. I am now collecting the links and tracking their usage in tweets, the same way as retweets for tweets. The nice thing is that I am tracking the final URL, so if someone created a short version of a URL, I actually follow the redirects to find the final destination. Next, I am prepared to limit the links shown in the UI to the last few weeks (currently 2), and in addition the relevancy of the links degrades over time. This means a link from today having 5 mentions in the Groovy community will eventually be ranked higher than a link from yesterday having 6 mentions, simply because time is an important factor for relevancy.



The real change for groovytweets is yet to come, though. As you might have heard, the new Twitter Retweet API is on its way. It has been changed multiple times now, based on a lot of user input flowing to Twitter – hopefully even mine. It will fundamentally change how Twitter aggregators/relevancy tools can count retweets. Until now, a retweet was a community-agreed syntax, like RT @originaluser text. In the groovytweets code I was analyzing each incoming tweet to decide if it fits one of the many retweet syntaxes, then tried to look up the original tweet and increase its relevancy.
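That syntax matching can be sketched in Java roughly like this; the two patterns below cover just the common "RT @user" and "via @user" variants, while the real groovytweets code handled many more:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RetweetParser {
    // Two of the community-agreed retweet syntaxes.
    static final Pattern RT =
        Pattern.compile("^RT\\s+@(\\w+):?\\s+(.*)", Pattern.CASE_INSENSITIVE);
    static final Pattern VIA =
        Pattern.compile("(.*)\\(?via\\s+@(\\w+)\\)?\\s*$", Pattern.CASE_INSENSITIVE);

    // Returns the screen name of the original author, if the tweet
    // matches one of the known retweet syntaxes.
    static Optional<String> originalAuthor(String tweet) {
        Matcher m = RT.matcher(tweet.trim());
        if (m.matches()) return Optional.of(m.group(1));
        m = VIA.matcher(tweet.trim());
        if (m.matches()) return Optional.of(m.group(2));
        return Optional.empty();
    }
}
```

Given the extracted screen name and the remaining text, the original tweet can then be looked up and its relevancy increased.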

Well, now Twitter is making the retweet an official concept. They even give you a new API method to look up the total retweets of a tweet, which sounds great. The downside is that each Twitter account may currently use 150 API calls per hour. If I wanted to update the 50 tweets displayed on the groovytweets homepage every minute, this means 50 tweets * 60 calls per hour = 3000 calls per hour. Well, I got 150. And that is not including the minutely check for new tweets coming from groovytweets friends. So: we’re in trouble here. One solution would be to get whitelisted for more API calls, but there is a better one (or two).

The one solution I still have some hope for is that Twitter will simply include a retweet count with each tweet. The problem here, I guess, is that I am interested in the retweets within a specific community only. And providing the count only for *my* friends instead of a global retweet count (which is way less relevant, some might argue) might potentially be a pretty resource-intensive task for them.

The next and more likely solution involves using the Twitter Streaming API. The good thing about this API is that it will show retweets. Although the API just changed again, making the Retweet the top element instead of the Tweet (and including a retweet_details element), it is then very easy to detect a retweet. The bad news: groovytweets is hosted on Google App Engine, and App Engine kills each request after about 30 seconds. So I invested some time finding a cheap vServer on which I open a permanent streaming connection to Twitter. I will then call an API over on groovytweets to feed the retweet information into the app. This splits the system into two parts, which I wanted to avoid, but it looks like the best solution.

Follow me @hansamann to get the news as it happens.