DI => WDI

Posted on January 24, 2014

Today is bittersweet! It is my last day on the Disney.com Platforms team at Disney Interactive.

On Monday, I’ll be moving down the street to join Walt Disney Imagineering: Research & Development as a “Developer, Web Collaborative Platforms – R&D Imagineer.”

To my fine, formidable #MATTERHORN teammates… thank you.

As for my new gig, needless to say, I’m pretty stoked.

Joining The Mothership

Posted on December 15, 2011

After 2.5 years, hundreds of days at the Disney theme parks, tens of thousands of lines of code, I’m hanging up my hat at TouringPlans.com to join the Walt Disney Company in Burbank. Specifically, I’ll be working for a new group called DIMG Innovation (DIMG == Disney Interactive Media Group) as a Senior Software Engineer. Innovation is working on some really killer web and mobile web projects that touch various divisions of the company, and I’ve been super impressed with the folks I’ve gotten to meet so far. I’m starting on January 3rd, 2012, and I’m eager to get to work.

Amongst my family and non-Disney community friends, I’ve always had an uncommon fire for The Walt Disney Company. My family has stomped through WDW since I was a kid, and I’ve dragged Kate, my wife, to Disneyland regularly since we moved to California (she now sometimes drags me). Over the past two and a half years at TouringPlans.com, I’ve been lucky to have some really amazing experiences at the Disney theme parks, both as a guest and a researcher. Through the Disney community of fans, I’ve met a ton of fantastic fellow fans and cast members who’ve made the whole experience of working in the third-party Disney ecosystem a real pleasure. After Kate and I got married this summer (woo!), we both started looking forward a bit. When this opportunity came to move south and see what the “inside” was really like — not to mention to join such a solid team — we decided to jump on it. Needless to say, I’m pumped for the chance.

Sadly, this means leaving not only an incredible group of people, but also a product I poured a ton of energy into and something I think has a huge amount of room to grow. I can’t express how grateful I am to my boss, Len, for taking a chance on me and giving me a massive degree of independence to get things done. I can’t thank Len and co-worker Fred enough for letting me join their quest towards making vacations as easy as possible, nor can I properly describe how fortunate I’ve been to work amongst the talented people we recruited to the cause. To the entire team: I’m extremely proud of all the things we built and the growth we achieved thus far, and I will tell grand tales of the adventures we’ve had. I will miss you guys heaps, and I hope to see you all out west soon.

So if you're coming to SoCal (is that an appropriate label?) and find yourself in the Burbank area, please give me a ring. Until then… see you in the parks.

Twitter, With Lists, Now More Like Facebook

Posted on October 29, 2009

Just played around with the new Twitter Lists. Some notes….

-You can follow people via lists without actually following them. I've stopped following celebs and companies and just added them to lists. I don't need their tweets coming into the main stream, but I'll now check them out via secondary streams every once in a while. These are now essentially Facebook "fan pages" for me.

-My "followers" list now looks more like my Facebook friends list than ever.

-Nearly all of my lists are private. I think most people will just keep them public, but I'm feeling more comfortable with them private for now. I have, however, created a Clown list (my college ultimate team) and a RoboCup list (my other collegiate team) for public record. Essentially, each is like a private Facebook group page, minus most of the metadata, plus the stream (Facebook could easily add the news feed to groups as well).

-Leonard Speiser had a good point that these new lists would be a perfect fit for a My Yahoo! / NetVibes / iGoogle type layout.

Overall, a very big change in the way the service works for me (I think; I might just never check the lists). Really looking forward to seeing the Geo/Retweet stuff next.

Why Facebook Should Revive Beacon And Build A Product Search Engine

Posted on September 25, 2009

Continuing in my series of armchair "oh man Facebook should totally do X" posts, I gotta say, it's sad to see Beacon die. Facebook could have used it to build one heck of a product search engine.

Facebook's biggest problem is focus. When you google for a tennis racket, odds are you want to buy a tennis racket. When you're on Facebook, you're seeing what your peeps are up to. Targeted ads are great, but despite doing them very well, Facebook's revenues aren't coming anywhere close to Google's.

So here's my theory: Facebook needs to bring back Beacon and build a search engine with it. The goal: FB should be the first place (before Google) you go when you want to buy something online.

I think it starts with Beacon passively resuming data collection, with permission from users (not publishing anything to the stream), from as many online vendors as possible. Sure, there are some privacy issues (understatement), but I think they could get around them.

The trick to making people feel normal about FB having this data is to anonymize the purchasers. Say I want to buy an LCD monitor. I search within FB and see monitors with price comparisons (like most product search engines), but I also see that people within my network have made purchases. If they are close enough in my network, Facebook will facilitate connecting me with them. And if the purchaser isn't my girlfriend who just bought me one for my birthday, the purchaser (a friend or friend of a friend) can agree to chat about the product. If the purchaser doesn't want to talk about the product (maybe it's embarrassing, or they don't have the time), I'll never know who they were.

For the vast majority of cases, this is awesome; you gain the power of friendly knowledge. A few months ago, I recommended an LCD monitor I had just purchased to Hendrickson. I did about 3 hours of research to buy my monitor; he decided to buy the same one within two minutes.

Now there are obvious problems here, the primary one being that, even within my extended network, no one may have bought an LCD monitor recently. This is where the power of anonymized collective intelligence comes into play. FB could list LCD monitors by popularity across the entire FB network, by region, and by age group (e.g. "which monitor is popular for people like me?"). They could also do trend analysis to spot products in vogue (sales of monitor X are accelerating) and pricing trends (monitor Y has been dropping in price). Moreover, people might actually trust purchasing data on FB more than they trust the reviews and ratings on online vendor sites.
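The cohort-popularity and price-trend ideas above can be sketched in a few lines of Ruby. All of the data, product names, and thresholds here are hypothetical, purely to illustrate the shape of the aggregation:

```ruby
# Hypothetical purchase records (in chronological order).
purchases = [
  { product: "Monitor X", region: "West", age: 27, price: 230 },
  { product: "Monitor X", region: "West", age: 31, price: 220 },
  { product: "Monitor Y", region: "East", age: 27, price: 310 },
  { product: "Monitor X", region: "East", age: 45, price: 225 },
]

# "Which monitor is popular for people like me?" => count purchases per
# product within a cohort (here: ages 25-34), most popular first.
cohort = purchases.select { |p| (25..34).include?(p[:age]) }
popularity = cohort.group_by { |p| p[:product] }
                   .transform_values(&:size)
                   .sort_by { |_, n| -n }

# Naive price trend: is the average of recent prices lower than the
# average of earlier ones? (Assumes prices are in chronological order.)
def trending_down?(prices)
  half  = prices.size / 2
  early = prices.first(half).sum.to_f / half
  late  = prices.last(prices.size - half).sum.to_f / (prices.size - half)
  late < early
end

x_prices = purchases.select { |p| p[:product] == "Monitor X" }
                    .map { |p| p[:price] }
puts popularity.inspect            # most-purchased products in the cohort
puts trending_down?(x_prices)      # has Monitor X been dropping in price?
```

The real system would obviously need anonymization and far better statistics, but the core queries are just group-by-and-count over the purchase graph.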

Also, I think you'd have more overlap on product searches than you might expect, especially since 1) friends typically buy similar things, and 2) Facebook is being used to connect not just with immediate friends but with people in your professional and online communities (for me, I'm friending more and more people within the startup and Disney communities). Product decisions are often very relevant within communities ("which hosting company should I use for my startup," "which resort should I stay at in WDW," etc.).

Another issue would be the response time of asking your friends. You'd have to take some concepts from the social search world (think Aardvark), plus maybe some incentives and game mechanics, to make it work. It may turn out the social part of product search is just a novelty, and the collective intelligence is good enough.

The collision of product information, search, purchasing histories, and the social graph could create something really interesting. Something a small startup can't build (you'd have to connect with all the vendors), and something vendors don't want to do themselves (though I do think Amazon + FB Connect would be hot).

Facebook & Geo-Location

Posted on August 19, 2009

Facebook needs to enter the geo-location / mobile social network game, and now is the time to do it. I wrote two comments today on M.G. Siegler’s (p)review of the new sweet-looking Facebook 3.0 iPhone App, and I felt this needed to be properly said.

Now, this app looks freaking awesome. Let me just say that. But I respectfully disagree with M.G. on one thing: the most disappointing thing about the app is that it lacks a map of where all my friends are (see image below, bottom-right). When I open up this app, it should ask me if I want to share my location. If I say yes, all of my friends on Facebook can see where I am. If I don't, they don't. My friends on Facebook are the people I want to share my location with, and this app should be the catalyst for sharing it (and THEN send out push notifications, rah!).

Here are five awesome reasons why Facebook should start to map mobile locations:

First, the audience is there and ready. FB and the iPhone are both growing like crazy, and the app itself already has a huge audience: it's consistently in the top 25 free apps, with over 100,000 ratings. And I take M.G.'s word as gospel: this new version will only make FB more popular on the iPhone.

Second, the iPhone has geolocation that works. Not in the background, mind you, but it doesn't matter since people check the FB app so damn often. Every time you open the app, it updates your location (if you want).

Third, iPhone users are exactly the type of audience that first joined and popularized Facebook: affluent, techy early adopters. And I tell you: if maps of friends' current locations start showing up in Facebook news feeds, people will start freaking out. OK, some may not be so positive, but I'm damn skippy that a lot of them will be. My point is that FB doesn't need a viral channel to promote this: just stick it in people's news feeds, and it will grow.

Fourth, big web companies, which Facebook is quickly becoming, like to take baby steps. Take Google Reader, for example. Instead of just coming out with a kickass version 2 that blows people's minds, it teases and annoys people with all these weird social features. Facebook seems to be doing this incremental feature thing as well; instead of taking Twitter head-on and making all status updates public, it iterates toward a more open, Twitter-like service. Doing optional geo updates via the iPhone app would be a pretty comfortable step toward a location-aware social network.

Fifth, and most obvious: Facebook has the social graph AND the eyeballs; other services (Loopt, Google, Whrrl) may be able to tap into the graph via Facebook Connect, but they'd have to do something ridiculous (and awesome) to get that kind of traffic.

So why isn't Facebook doing this? Maybe they don't think people want it yet. Or maybe there are too many privacy settings and legal issues to worry about. But I'll leave it at this: it'll be easier to take on the geo/mobile guys now than later.

Mad props to Jason for the Photoshop wizardry.

Nutz’s Wedding Photo Set

Posted on July 12, 2009

Here is a Flickr set from a weekend Kate and I spent in Bend, Oregon celebrating @afischer and @shollen's wedding.

How Towing Companies Have Changed With The Times

Posted on April 07, 2009

So my car nearly caught fire tonight; the starter refused to turn itself off and messed up some stuff internally. But this article is not about the drama (everything turned out OK).

What amazed me, though, was the speed of the response and ultimately how smoothly the towing went. And it's largely because of tech.

Think about how towing happened thirty years ago. First of all, you were screwed if you weren't near a phone, and that's assuming you could flag someone down to go call a towing company and hopefully remember your vague location. Cell phones help _a lot_. Second, AAA is computerized now, so even though I called some dispatch center who knows where, after I gave my location and AAA number they were able to dispatch a local towing company with relative ease. That company then called me and told me how far out they were en route. Even then, since I was in downtown Palo Alto and my car wasn't in an obvious place, I still had to flag the driver down somehow. Thanks to my cell phone and his Bluetooth setup, that was super easy; as I was running out of the office to find him, he was driving around and we could talk smoothly until we saw each other. When I caught a ride up with him in the cab, he didn't have to intimately know the area to find my dealer: he had GPS. And some other GPS-like device that I believe showed a queued dispatch list from AAA. Pretty slick.

Overall, the process worked. It's also fun to think of the next few tech leaps. One, cars will just be more robust and less prone to breaking down so epically, and they'll have better sensors and diagnostics to flag issues way earlier (think preventive care, like in the health care industry). Further, with geo-locator units in both my car and my phone, they could have found me even more easily than they did, without having to rely on error-prone directions from people who can be quite flustered when their car goes all crazy.

reCAPTCHA on Rails

Posted on March 08, 2009

This has probably been done to death, but for the sake of Random Google Searches (RGSs), here is a quick run through about how to do reCAPTCHA with Rails.

First off, you need to register at recaptcha.net. Then you add your domain and get two keys (one public, one private). Once registered, I created two sites: one for my development environment (localhost) and one for production.

Second, there is a Rails plugin. It’s on GitHub. So install it like this:

$ ./script/plugin install git://github.com/ambethia/recaptcha.git

Third, take those public and private reCAPTCHA keys and place those suckers in your environment.rb or the appropriate environment file. Here's what I added to the bottom of my development.rb, replacing 'MY_PUBLIC_KEY' and 'MY_PRIVATE_KEY' with the keys from recaptcha.net (but keeping the single quotes):


ENV['RECAPTCHA_PUBLIC_KEY'] = 'MY_PUBLIC_KEY'
ENV['RECAPTCHA_PRIVATE_KEY'] = 'MY_PRIVATE_KEY'

Fourth, find the place in your views where you want the reCAPTCHA box to appear. The plugin defines a special view helper named recaptcha_tags. Here’s a basic example:

Ask a Question -- Get an Answer!

<% form_for(@question) do |f| %>
  Question: <%= f.text_field :question %>
  Human Test: <%= recaptcha_tags %>
  <%= f.submit "Ask Question" %>
<% end %>

And here's what that basic HTML looks like once rendered (screenshot originally appeared here):

Each page load embeds code that pings the reCAPTCHA API and generates a new captcha. Note that recaptcha_tags takes some extra options if you need to handle SSL or want to skip JavaScript by default (it uses an iframe instead).

Lastly, we have to handle the verification in the controller. Here’s how I integrated it:


def create
  @question = Question.new(params[:question])
  if verify_recaptcha() and @question.save
    redirect_to :action => 'show', :permalink => @question.permalink
  else
    render :action => 'new'
  end
end

The verify_recaptcha() method takes params from the POST request, pings recaptcha.net, and returns true or false. Then you can handle it however you want (here, in the same conditional as the model save). If the captcha fails, the new page renders again, and a small error message shows up in the reCAPTCHA box.
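For the curious, the verification step boils down to an HTTP POST and a two-line plain-text response. Here's a rough Ruby sketch of that round trip; the endpoint URL and field names follow the classic reCAPTCHA API as I understand it, so treat this as an approximation rather than the plugin's actual internals:

```ruby
require "net/http"
require "uri"

# Classic reCAPTCHA verify endpoint (an assumption; check the docs).
VERIFY_URI = URI("http://api-verify.recaptcha.net/verify")

# The API answers with a plain-text body: first line "true" or "false",
# optional second line an error code such as "incorrect-captcha-sol".
def parse_recaptcha_response(body)
  status, error = body.split("\n", 2)
  [status == "true", error&.strip]
end

# Sketch of the full round trip (not called here, since it hits the network).
def verify_captcha(private_key, remote_ip, challenge, response_text)
  res = Net::HTTP.post_form(VERIFY_URI,
    "privatekey" => private_key,
    "remoteip"   => remote_ip,
    "challenge"  => challenge,
    "response"   => response_text)
  parse_recaptcha_response(res.body)
end

ok, err = parse_recaptcha_response("false\nincorrect-captcha-sol")
puts ok    # false
puts err   # incorrect-captcha-sol
```

The plugin wraps all of this up, adds the error message to the view, and exposes the boolean result, which is why the controller code above stays so short.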

And that’s about it. Hat tip to Rob Olson for implementing this originally on Elevator Pitches (I checked out that implementation first).

The Long Huck => hwork.org

Posted on March 06, 2009

Finally made the switch from Blogger to WordPress.

All links are now redirected. The feed is now redirected (and hosted on FeedBurner). I've been working with WordPress for over a year now, so I finally figured I should switch to it myself.

Twitter Domain Duplicate Content Issues

Posted on March 06, 2009

Found this issue while vanity searching for 'hwork', my Bowdoin-issued internet alias. Turns out that Twitter maintains (at least) three different subdomains for each user, all crawlable by Google: twitter.com/hwork, http://m.twitter.com/hwork, and explore.twitter.com/hwork. The first is obviously the main site (they redirect all requests from www.twitter.com/* to twitter.com), and the second is the mobile site (which you're redirected to if you connect with a mobile user-agent). The third I don't recognize, but it's literally the exact same content as the main domain (save maybe some differences based on your session data).

Anyway, the fix for Twitter is really simple: just disallow all URLs via robots.txt on the m.twitter.com and explore.twitter.com domains. http://m.twitter.com/robots.txt should look like:

User-agent: *
Disallow: /

This should remove a ton of duplicate Twitter URLs from Google. According to the SEO for Firefox plugin, Ev's regular Twitter page has a PageRank of 7, while the mobile version of his page only has a PageRank of 4. Interestingly, Ev's explore-domain page also has a PageRank of 7.
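For illustration, here's a toy Ruby sketch of how a crawler might apply those Disallow rules. This is a simplified prefix matcher, not a real robots.txt parser (it ignores Allow lines and per-agent groups):

```ruby
# Returns true if `path` matches any Disallow prefix in the robots.txt
# text. Real crawlers also honor per-User-agent sections and Allow rules;
# this only handles the simple "block everything" case shown above.
def disallowed?(robots_txt, path)
  prefixes = robots_txt.lines
                       .map(&:strip)
                       .select { |l| l.start_with?("Disallow:") }
                       .map { |l| l.split(":", 2).last.strip }
                       .reject(&:empty?)
  prefixes.any? { |prefix| path.start_with?(prefix) }
end

robots = "User-agent: *\nDisallow: /\n"
puts disallowed?(robots, "/hwork")  # true: "Disallow: /" blocks every path
```

Because "Disallow: /" is a prefix match against every URL path, that two-line file is enough to pull an entire subdomain out of the index.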

Update: Found another dupe-content domain: http://api.twitter.com/hwork.