Mobile coding lessons i’ve learned from writing an Android application

I recently completed a major version of an Android client for work.  I guess no program is ever really “complete”.  But it’s at a good stopping point for the moment, so i thought i would jot down some of the lessons i learned.  These can really apply to any type of development, not just mobile Android.

  • The network layer is unreliable
First of all, cell phones have flaky connections that come and go.  They might be on wi-fi, they might be on a cell connection.  They might be in a tunnel.  Maybe they’re in airplane mode.  Doesn’t matter.  You can’t count on a network always being there, and even when it is there, you have to assume that it will be flaky and return strange errors.  It’s important to have a robust network layer with retries built in for transient errors such as timeouts and service-unavailable responses.  Don’t just assume the backend is down.  Assume first that the user’s connection is temporarily out of sorts, retry a few times, and THEN give up.  That will eliminate 90% of the network errors a user has to be aware of.  Oh, and some phones, vendors, and flavors of Android all have weird little network quirks that only they exhibit.  Yet another reason for a robust network abstraction.
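A retry layer doesn’t have to be fancy.  Here’s a rough sketch in plain java of the idea (the class and method names are made up for illustration, not from any library): retry on transient IO errors with a growing backoff, and only then give up.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Sketch of a retry wrapper for flaky network calls: retry transient
// IO errors a few times with a growing backoff, and only then give up.
// (Class and method names here are invented for illustration.)
public class RetryingCaller {
    public static <T> T callWithRetries(Callable<T> call, int maxAttempts,
                                        long initialBackoffMs) throws Exception {
        Exception lastError = null;
        long backoff = initialBackoffMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (IOException e) {
                // Transient network trouble: remember it, wait a bit, retry.
                lastError = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff);
                    backoff *= 2; // back off a little more each time
                }
            }
        }
        throw lastError; // all attempts failed; surface the last error
    }
}
```

Wrap every outgoing request in something like this and most one-off connection hiccups never reach the user.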
  • Deal with threading issues

If you don’t want to die from a thousand little cuts, design your code to be thread-safe from the beginning.  Many phones now have multiple cores and really do execute things in parallel.  Add to that the fact that any complex app will likely have multiple IO requests in flight at the same time (whether from the file system, database, or network) that can return asynchronously, and you are ripe for threading problems.  Deadlock, data switching out from underneath you, etc.  Use the threading tools.  Synchronization, locks, atomic operations, and especially the concurrent package are your friends.
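As a tiny example of leaning on the concurrent package instead of hand-rolled locks, here’s a sketch (the class is invented, purely illustrative) of a stats object that multiple threads can hammer on safely:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// A request counter that many threads can update at once.  The concurrent
// package does the heavy lifting: no synchronized blocks, no explicit locks.
public class DownloadStats {
    private final AtomicInteger totalRequests = new AtomicInteger();
    private final Map<String, AtomicInteger> perHost =
            new ConcurrentHashMap<String, AtomicInteger>();

    public void record(String host) {
        totalRequests.incrementAndGet();
        // putIfAbsent is atomic, so two threads racing on a new host
        // still end up sharing a single counter.
        perHost.putIfAbsent(host, new AtomicInteger());
        perHost.get(host).incrementAndGet();
    }

    public int total() { return totalRequests.get(); }

    public int forHost(String host) {
        AtomicInteger n = perHost.get(host);
        return n == null ? 0 : n.get();
    }
}
```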

  • Be asynchronous

Seriously.  Don’t block the UI thread for any reason.  If you’re loading a file from the file system, accessing the database, making a network call, or even doing some heavy number crunching, do it on a thread and call back to the UI when you’re done.  Android has some nice abstractions to help with this, including the AsyncTask and Handler classes.  You can also use traditional Runnables if you like.  But whatever method you use, make sure you have an async layer built into your app that’s easy to use.  Make sure your main application logic can deal with everything being asynchronous.  Your users will thank you for a snappy app.  Corollary: Don’t block the UI needlessly with modal dialogs unless you absolutely have to wait for a result.  Let the user do other things while your background processes are running.
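Stripped of the Android classes, the pattern underneath AsyncTask boils down to something like this plain-java sketch (names are invented; on Android you’d post the callback back to the UI thread with a Handler rather than letting it run on the worker like this does):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The core of the pattern: run slow work off the calling thread and hand
// the result to a callback when it's done.
public class BackgroundLoader {
    public interface Callback<T> {
        void onResult(T result);
    }

    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public <T> void load(final Callable<T> slowWork, final Callback<T> callback) {
        worker.execute(() -> {
            try {
                callback.onResult(slowWork.call());
            } catch (Exception e) {
                callback.onResult(null); // real code would surface the error
            }
        });
    }

    public void shutdown() { worker.shutdown(); }
}
```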

  • Be aware of constrained resources

Ya, this is a fun one.  You have very limited memory in which your application can run.  We’re not on a desktop here.  Some phones are far worse than others as far as how much memory they let you have.  Loading images is an especially easy way to run out of memory.  Have a strategy in place to only keep in memory what you need.  Don’t keep things lying around longer than you must.  Memory is fast, but disk is cheap and still much faster than the network.  It’s better to spend a few milliseconds reloading an image from disk than to run out of memory and crash the program.  So load it from the network, save it to disk, cache it in memory until you run low, and then evict with some type of LRU cache.
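For the in-memory piece, java’s LinkedHashMap gives you a serviceable LRU cache almost for free.  A minimal sketch (capacity is counted in entries here; a real image cache would weigh entries by their size in bytes):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal in-memory LRU cache built on LinkedHashMap's access ordering.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder=true moves an entry to the back on every get()
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once we're over capacity.
        return size() > maxEntries;
    }
}
```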

  • Understand and work with the application lifecycle

This one may be somewhat Android-specific, but the general idea is sound for anything:  Each activity (screen/window) has a lifecycle.  It’s created, set up, running, paused, and eventually destroyed.  It might also be resumed and restored in the middle of all that.  The framework provides rules about when all these things happen and how you should deal with them.  Make sure you do the correct thing at the correct time.  Expect that since this is a mobile environment with lots of stuff going on, your activity might be asked to pause or shut down at any time, even mid-process.  Save off your state, be able to restore it, and know how to deal with data inconsistencies that might result.  Don’t expect that x then y then z will always happen.  X and Y might happen, and you’ve started Z, but then a phone call comes in and your activity goes away.  When you come back, what do you do?  Make sure you do the right thing.
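On Android the hooks for this are onSaveInstanceState and friends, but the save/restore idea itself is plain java.  A sketch, with an ordinary Map standing in for Android’s Bundle and an invented “checkout screen” as the example:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of save/restore: whenever the framework warns you that you might
// go away, write out your in-progress state; on the way back, rebuild from
// it and fall back to a consistent point if the data looks half-finished.
// (A plain Map stands in for Android's Bundle; the class is invented.)
public class CheckoutScreen {
    private String enteredAddress = "";
    private int stepCompleted = 0; // how far through x -> y -> z we got

    public void saveState(Map<String, String> out) {
        out.put("address", enteredAddress);
        out.put("step", Integer.toString(stepCompleted));
    }

    public void restoreState(Map<String, String> in) {
        enteredAddress = in.containsKey("address") ? in.get("address") : "";
        stepCompleted = in.containsKey("step")
                ? Integer.parseInt(in.get("step")) : 0;
        // Never trust that the last step actually finished; roll back to
        // a known-consistent point if the saved value is out of range.
        if (stepCompleted < 0 || stepCompleted > 2) stepCompleted = 0;
    }

    public void setAddress(String a) { enteredAddress = a; }
    public String getAddress() { return enteredAddress; }
    public void setStep(int s) { stepCompleted = s; }
    public int getStep() { return stepCompleted; }
}
```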

the blinking cursor

Recently a friend of mine sent along an article i found quite fascinating. Normally i’d have read it and moved on. But this one really caught my attention. The gist of it: the author writes about a young kid in the 80’s who got his first computer and became a self-taught programmer. (Sounds a lot like me.) Then the author goes on to describe how he himself tried to become a self-taught programmer and failed miserably time and again. The article is his analysis of how he finally succeeded.

But the part that really resonated with me? It’s how he describes this young man getting into computers in the first place. It was like i was looking into a mirror while i read this.

When Colin Hughes was about eleven years old his parents brought home a rather strange toy. It wasn’t colorful or cartoonish; it didn’t seem to have any lasers or wheels or flashing lights; the box it came in was decorated, not with the bust of a supervillain or gleaming protagonist, but bulleted text and a picture of a QWERTY keyboard. … On the whole it looked like a pretty crappy gift for a young boy. But his parents insisted he take it for a spin … And so he did. And so, he says, “I was sucked into a hole from which I would never escape.”

It’s not hard to see why. Although this was 1983, and the ORIC-1 had about the same raw computing power as a modern alarm clock, there was something oddly compelling about it. When you turned it on all you saw was the word “Ready,” and beneath that, a blinking cursor. It was an open invitation: type something, see what happens.

That’s also how my adventure began. When i got my first computer, i sat there for hours going through the old manual, making blips and bleeps, and little guys running across the screen. Humble beginnings – but hey – it paved the way for my eventual career in computer science.

The rest of the article is worth reading too. It talks about the author and how he came to learn the ins and outs of computer programming (and more generally – basic logical problem solving).

His first several attempts consisted of buying a big fat textbook – you know, one of those “teach yourself in 21 days” books. The 1500-page, dry, boring texts that even _I_ can never get through – and i’ve been a programmer for decades now. Finally he discovered an online ‘teach yourself programming’ course that was put together by a now grown-up Colin Hughes.

What’s interesting is how he goes about it. Learning doesn’t have to consist of dry, boring, sterile sets of facts, rules, and procedures. It can be fun, engaging, interactive. Almost gamelike. The majority of the article talks about making learning fun for the student so that they WANT to explore a little more, and then a little more, and before you know it, they’ve mastered something along the way.

If you’d like to check it out, you can read it here.


My mom sent along some pictures of our 2nd computer (she’s right – the first was an Aquarius, soon followed by an Atari 130XE – which is where i really started to program).

The Grid

Sparked by the recent Tron movie, i started thinking about the hyper-evolved 1980’s environment which is “The Grid”.  Couple this with an interesting podcast discussion i was listening to about how different generations are interested in different things, and i find myself with something to post. :)

Think back to when you were in your teens.  What was the cool new thing at the time?  For my generation, it was home computers.  Sure, computers had been around for decades as big giant mainframes and house-sized computers in universities and government buildings.  But it wasn’t really until the early 80’s that they became accessible to the masses through the likes of Atari, Amiga, and Commodore.  They were magical things.  The world suddenly opened up to me.  I had this little box that i could control.  I could play pixelated games in 4 colors.  I could write papers and design ascii-art banners and send them to a dot matrix printer.  It made little bleep sounds.  And the best part?  I could write my own programs to do anything i could imagine (well … limited to the sparse programming materials i could find at the time).

The home computer was a wonder.  To my parents it was a little scary.  They didn’t quite know what to do with it.  They coped, but it’s never really been a core part of their lives.  Now let’s rewind a generation.  What’s the cool thing when my parents were growing up?  Televisions in every home?  They probably thought that was the coolest thing ever.  To me, a tv is just a tv.  It’s always been there.  No big deal.  I use it, i like it, but it doesn’t inspire me.

Rewind further.  Radio.  You can actually hear what someone is saying hundreds or perhaps even thousands of miles away.  At the same time as other people all around the country!  They’re talking TO YOU.  Telling funny stories, playing old time music.  But to me (and to my parents i’d imagine), it’s just a radio.  You use it, it’s there.  Certainly not awe-inspiring like it was to the generation when it first came out.  We can go further back, but i think you get the idea.

Let’s instead move forward a bit.  My kids.  They have computers.  All around them.  I’ve got phones that are far more powerful than any computer i had growing up.  My kids have them, they use them, they’re convenient.  But so what?  They’re just things.  They don’t inspire awe or imagination.  Something else must inspire them, although i haven’t quite figured out what it is yet.  Smart phones, music players, the internet, mmorpg’s, youtube, facebook, 3d movies?

There is no “grid” for them.  Which is why Tron is probably just another movie to people from before or after my generation.  Sure, it’s got amazing special effects.  The soundtrack rocks.  But the concept of programs that look and act like us living inside of a virtual city?  To me, it was something cool to ponder and imagine.  Could it really happen?  To my kids … ehh.  They don’t have the context of wonder that i had back in the early 80’s when PC’s were just coming into their own and the grid was an exciting and revolutionary idea.  And it makes me a little sad.  And also a little curious and excited to see what the next revolutionary awe-inspiring thing will be.

Unit testing private java methods

Warning: dry tech article ahead.

As is often the case when beginning a new project, I like to get the ground rules set.  On a well-done java project, unit testing is a given in my mind.  But one thing that’s always been a sticking point is how to unit test private methods and members.  In order to keep a clean class, you don’t want to change the scope of a method to public just so you can unit test it.  And you certainly wouldn’t want to put your unit tests inside your actual class.

Some other approaches include package scope or pass-through methods.  Again – this dirties your code just for the sake of a unit test.  After a few minutes pondering, i figured that reflection must be the answer.  And yes – there is a really nifty way to have your cake and eat it too.  Cleanly separate your unit tests from your code AND keep your private methods and members private.

But first, let’s hear the naysayers.  I was surprised to find that many people think you shouldn’t test private methods.  “If it’s private, it’s not part of the contract, and the point of a unit test is to test the public contract of a class.”  I couldn’t disagree more.  Oftentimes, business logic and important algorithms that only make sense to the class are encapsulated in private methods.  But in order for the “public contract” to work correctly, all of the underlying pieces must also work correctly.  Sure, you can indirectly test the private internals by assuming they work if you get the right output on a public call, but i just think it’s better to test each of the individual cogs in the machine to make sure each piece does its job.  Then when you assemble it all together into a larger whole, everything just works.

Another counter argument is “if you test all these private methods, your test cases will be brittle”.  There’s a modicum of truth to that.  Chances are that if you stick to the public API’s, they will change far less frequently than an internal private method.  But i believe that if you properly design your code so that the private methods do one job and do it well, once you write your test, there isn’t much need to go back and ever mess with it again.  And if you do – be sure to update your unit test.

Ok, enough of the philosophy.  On to the technical details of how you actually accomplish this.  Let’s assume a class that has the following private member and method.

public class MyClass {
    private int _myPrivateMemberVar;

    public MyClass() { ... }

    private String doSomethingInteresting(String s, List<Integer> li)
    { ... }
}

Now, how to access these from a unit test that’s not in the same class? Access it via reflection and change the access at runtime.

    //change the accessibility of the method and member
    MyClass c = new MyClass();
    Method method = MyClass.class.getDeclaredMethod(
        "doSomethingInteresting", String.class, List.class);
    method.setAccessible(true);
    Field field = MyClass.class.getDeclaredField("_myPrivateMemberVar");
    field.setAccessible(true);

    //call the method
    String myString = "foo";
    List<Integer> myList = new ArrayList<Integer>();
    String result = (String) method.invoke(c, myString, myList);

    //access the member
    int memberVal = (Integer) field.get(c);

And there you have it – accessing private methods and members at runtime from a unit test. Happy testing!
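If more than one test needs this trick, it’s worth pulling the boilerplate into a little helper.  Something like this (the class name is my own invention; it’s not from any testing framework):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

// Small helper to keep the reflection boilerplate out of individual tests.
public final class PrivateAccess {
    private PrivateAccess() {}

    // Invoke a private method by name, making it accessible first.
    public static Object invoke(Object target, String methodName,
                                Class<?>[] paramTypes, Object... args)
            throws Exception {
        Method m = target.getClass().getDeclaredMethod(methodName, paramTypes);
        m.setAccessible(true);
        return m.invoke(target, args);
    }

    // Read a private field by name, making it accessible first.
    public static Object getField(Object target, String fieldName)
            throws Exception {
        Field f = target.getClass().getDeclaredField(fieldName);
        f.setAccessible(true);
        return f.get(target);
    }
}
```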

Why the java hate?

What is up with all the java haters out there?  I’ve been thinking about this for a while.  If you don’t care about techy posts, this will probably just bore you.

First off i’ll say that any language has its detractors.  No one language is perfect for everyone.  No one language is good at doing everything.  I think lots of languages have their places and lots of problems can be solved in different ways with different degrees of success by any number of languages.

But seriously – why is it that java is viewed in such a harsh light nowadays?  In multiple job interviews and at companies i’ve worked at, java is a hiss and a byword.

When i was going to college and java was the shiny new toy, two things happened.  First – everyone (including myself) went “uh, why would i want to do anything using java?  it’s just for making silly web animations, right?”  And at first, that’s mainly what people used it for.  Applets.  Those silly web animations.  But actually you could do much more.  You could write an entire fully functional program that ran in the browser if you wanted (and you could even access restricted resources if you asked nicely for the user’s permission).  Still, applets kind of sucked then, and they still suck today.  They’re slow, they are memory hogs, and they never work very “smoothly”.

Second, after the applet, everyone went “but what about creating cross-platform GUIs?  Swing to the rescue, right?”  I did that too.  I created several large enterprise applications back in the 90’s and even early 2k’s using Swing.  And guess what – Swing is slow.  It sucks.  It doesn’t look “smooth”.  It doesn’t behave like the native GUI apps.  So again – can’t complain when anyone says java isn’t good at making gui apps.

So what’s left?  Well, there’s the server side.  And this is where i think java absolutely rocks, and why i don’t get why people think it just totally sucks all around.  It’s got all the features you could want to do anything.  It’s a simple language, syntactically speaking.  You can come up to speed on it much more quickly than you can with c/c++.  If you really need to drop down low for some serious speed in a critical section, use JNI and call a c function.

But here’s where i think people get the wrong impression.  And i can’t believe i’m going to say this, but “kids nowadays use java as a crutch”.  Yes, i think that’s probably true.  They don’t start out learning the low-level constructs and theory behind programming languages.  They don’t understand what makes the programs tick.  They just fire up a java editor and start writing code.  There’s so much detail that’s hidden from a java programmer that it’s easy to see why someone more “hard core” might poo-poo someone whose main competency is java.

I’ll tell you though – not all java programmers are like that.  There’s quite a few of us who DID do c/c++ in school (and even in our professional careers).  We learned the theory.  We know why compilers and languages do what they do.  And you know what – i’m damn glad there’s a language like Java that hides most of the crap from me.  It lets me focus much more on solving the problem at hand and writing the app.  Whenever i dive into c, i spend more time worrying about the syntax of the language and the memory management and the pointer arithmetic than i do about the algorithm.  My productivity is cut down by 50% or more.  THAT’s why java rocks.

But yes – you can write some really shitty code using java if you don’t understand why things work the way they do.  So to all you java haters out there – make sure you give a java guy a fair shake.  They might surprise you and actually be able to write some seriously good stuff with the language.  And to all you java weenies out there who don’t understand the guts of why things are the way they are – figure it out.  Take a class, read a book, pick someone’s brain.  Learn the lower-level details so that you’re aware of them and can make use of that information.  It’ll help you write better code and avoid a lot of issues that java lets you gloss over.  And you’ll be a lot more marketable as a result.