Friday, August 19, 2011

Stop micromanaging - just set clear expectations

Prior to my startup days I spent much of my time managing development teams - setting tasks, reviewing them, and all too often redoing them because the outcome wasn't what I'd expected. The problem wasn't so much that the task had been done poorly, but more that I had done a poor job of setting clear expectations.

I hate being micromanaged as much as the next person, so I prefer to clearly define the task up front and then get out of the way. The problem with this is that sometimes people have a very different picture in their head as to what a successful outcome looks like.

Didier tweeted this great HBR post on how to get involved without micromanaging last week and it reminded me of a checklist that I started using with my development team to improve the outcomes of tasks I set - "CPQQRT". Despite its clumsy name, quickly running through this in my head whenever I had to task my team really helped in aligning expectations.
  • Context: what is the background?
  • Purpose: what are you trying to achieve?
  • Quality: what quality level is required?
  • Quantity: how much is necessary?
  • Resources: what resources are at your disposal?
  • Time: how long have you got to get it done?
Here's an example task for a software developer to illustrate. The task: "Create a report of all outstanding software defects and an overview of our defect management process".
  • Context: The customer is concerned with the amount of time we've been spending fixing defects so they'd like to better understand what's outstanding, and get an overview of our defect management approach.
  • Purpose: We need to give the customer confidence that we are on top of the current defects and that our defect management approach is thorough without being over the top.
  • Quality: Needs to be presentable to the customer and the data has to be accurate.
  • Quantity: A couple of pages will do - e.g., a summary table of defects and a process diagram with some explanation will be sufficient.
  • Resources: You can use tool xyz to draw up the process flow if you like; talk to Bob if you need help on how to drive it.
  • Time: Need the report by Friday - make it your top priority.
Our developer should now have a clear idea of what is expected of him, and the outcome is unlikely to contain any surprises.

Friday, June 10, 2011

Anyone there?

It's been ages since the last post on this blog. There are several reasons for this, including me being a little lazy with it, but mainly that I'm working at a new startup, Culture Amp. I've been putting a few posts together at the company blog at http://blog.cultureamp.com.

My most recent posts are around people development. Take a look and please comment and contribute over there.

Cheers,
Rod

Saturday, October 2, 2010

Trampoline day - September 2010



I attended my first trampoline day in Melbourne last week. Trampoline is an interesting concept - a self organising "un-conference" where the attendees build the agenda as they go, and then facilitate a session for 20 minutes. Sessions can be presentations, workshops, or simply open discussions on anything that the facilitator finds amazing.

Held at Circus Oz in Port Melbourne this year, the event drew about 150 attendees, and there were some interesting sessions ranging from the political and spiritual to the scientific and purely entertaining...

I attended the following sessions:
  • Born to run - taught me why I should be running in bare feet.
  • A solution focussed approach to change - if only we could take this approach in the corporate IT world more often.
  • How to 2-step to punk music - a bit like this
  • My TV Remote sucks - usability 101
  • Ecosystems and the God Particle - inspired by the book Massive
  • Coffee with @jonathannen and @dougenglish - not really a session but a good coffee break
  • Anatomy of a web startup - how to raise your first million, from someone who had
  • The Future of Money - is money evil?
Pretty much all sessions were interesting, especially those that pitted the hippies against the scientists which made for entertaining "un-collaboration".

Looking forward to the next one...

    Sunday, August 29, 2010

    Unit testing EJB 3.1 with Derby DB and embedded Glassfish

    Unit testing EJBs has always been a bit of a pain as you'd need to deploy them to a container first. This is something that JEE6 addresses by specifying an embeddable implementation of the container. Using the embeddable container is simple enough, but I wanted to unit test an EJB that acted as a façade to a JPA entity backed by a Derby database, and I've had all sorts of problems getting this going.

    I originally tried this based on sample code from chapter 6 of Antonio Goncalves's book Beginning Java EE 6 with GlassFish 3, which, despite being an excellent book, had an example I just couldn't get working. I'd been trying to get the sample going for a little while and followed heaps of different blogs and ideas, but was still unsuccessful. Recently I was pointed to this great post, which gave me enough info to get it working, so I figured I'd re-post the example from Antonio's book with some extra info so that anyone bashing their head against the same wall can get it running.

    Taking the original code from Chapter 6 of Antonio's book, it looks something like this:

    • Book.java: a JPA entity that handles storage and retrieval of book data.
    • BookEJB.java: a stateless session bean that acts as a façade over the JPA entity.
    • BookEJBTest.java: a class to test the EJB using JUnit.
    • persistence.xml: a configuration file specifying the persistence unit.

    The issue with the sample code is that when an instance of the embedded container is created, it doesn't successfully find BookEJB to deploy, and it doesn't have any knowledge of the jdbc-connection-pools and jdbc-resources required for a JTA data source.

    The original code for instantiating the embedded container is:

    public static void initContainer() throws Exception {
        ec = EJBContainer.createEJBContainer();
        ctx = ec.getContext();
    }
    To resolve these issues we need to do a few things:
    • Create a simple Glassfish domain with a domain.xml configuration file that contains the relevant jdbc-connection-pools and jdbc-resources.
    • Tell the embedded container where to look for this configuration and for the EJB classes to deploy.
    To do this, replace the provided initContainer method with the following code:

    @BeforeClass
    public static void initContainer() throws Exception {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(EJBContainer.MODULES, new File("target/classes"));
        properties.put("org.glassfish.ejb.embedded.glassfish.installation.root", "myGlassfish");
        properties.put(EJBContainer.APP_NAME, "chapter06");
        ec = EJBContainer.createEJBContainer(properties);
        ctx = ec.getContext();
    }


    Specifying the "EJBContainer.MODULES" property tells the embedded container explicitly where to look for EJBs to deploy.

    Specifying the "org.glassfish.ejb.embedded.glassfish.installation.root" property allows the container to find the resource adapters and domain.xml.
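For reference, the jdbc-connection-pool and jdbc-resource entries in that domain.xml would look something like the sketch below. The pool name, database name and JNDI name are illustrative examples (use whatever your persistence.xml expects), and where exactly the file sits under the installation root depends on how your embedded domain is laid out:

```xml
<!-- Sketch of the resources section of domain.xml.
     Pool name, database name and JNDI name are illustrative. -->
<resources>
  <jdbc-connection-pool name="DerbyPool"
                        res-type="javax.sql.DataSource"
                        datasource-classname="org.apache.derby.jdbc.EmbeddedDataSource">
    <property name="databaseName" value="chapter06DB"/>
    <!-- have Derby create the database on first connection -->
    <property name="connectionAttributes" value=";create=true"/>
  </jdbc-connection-pool>
  <jdbc-resource jndi-name="jdbc/chapter06DS" pool-name="DerbyPool"/>
</resources>
```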

    When looking up the BookEJB, you need to use the fully qualified name (note the app name 'chapter06' that we specified as a property in the modified code):
    BookEJB bookEJB = (BookEJB) ctx.lookup("java:global/chapter06/BookEJB!org.beginningee6.book.chapter06.BookEJB");


    Now it’s just a matter of putting the files in the right locations, ensuring your persistence.xml and domain.xml are consistent and the unit test will work...
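As a rough guide, a persistence.xml consistent with this setup might look like the sketch below. The persistence unit name and provider property are illustrative, and the jta-data-source has to match a jdbc-resource defined in your domain.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <!-- Unit name is an example; jta-data-source must match a
       jdbc-resource defined in domain.xml. -->
  <persistence-unit name="chapter06PU" transaction-type="JTA">
    <jta-data-source>jdbc/chapter06DS</jta-data-source>
    <class>org.beginningee6.book.chapter06.Book</class>
    <properties>
      <!-- provider-specific; shown here for EclipseLink -->
      <property name="eclipselink.ddl-generation" value="create-tables"/>
    </properties>
  </persistence-unit>
</persistence>
```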

    Assuming you already have the original code (including all chapters etc), unpack the updated code in the same location, run 'mvn clean test' and it should all hang together...

    Here's a screen shot of the workspace...




    Monday, July 12, 2010

    Stop, collaborate and listen - Jodoro


    I caught up with some ex-colleagues for a few fat yaks the other day and it turns out they've founded their own company, jodoro. Based on their experience as software architects working in anger with large enterprises trying to sort out their data models, they've put together "graft", a collaborative data modelling tool and service which provides a new way of attacking the data model collaboration problem.

    Graft is hosted in the cloud and provides organisations with a collaborative data modelling environment within which models can be developed, extended, communicated, and published completely on-line. It's in early release stage at the moment, but it has a great user experience and I can see how such a tool could be used to assist in many problem areas including:
    • efficient development, communication, collaboration, and implementation of data models on development projects.
    • organisations looking to adopt and collaborate on industry standard models

    Delivery projects
    Graft makes sense on several recent projects I've worked on. On these projects a large delivery team (50+) had to design, develop, share, communicate and collaborate on a data model in a tool with a circa-1990s collaboration vibe (a thick desktop client interacting with a model repository file - whilst it was designed for multi-user operation, it still has one foot firmly entrenched in the single-user world). The project essentially used the tool as diagramming software - taking screenshots of the model, pasting them into a Word document and then emailing that to the client for review. Graft would move this process to a completely online world where all users interact with the actual data model, and I can see a future where organisations could configure their own governance workflow in the tool to keep the Enterprise Architects happy. It's only early days, but the collaboration experience will only get better as new features are added.

    Adopting industry standards
    Graft could also help organisations in adopting industry standard models and collaborating with other organisations in the same domain (assuming they would play nice). For example, graft allows organisations to take a standard model, select the parts of the model that are relevant to them, and leave all other model elements as 'passive' (they still exist but are not implemented). Over time the organisation can choose to include these model elements and along the way can publish their model publicly so that other players can extend the model and in turn republish their own customisations. This feature makes it very attractive to organisations that want to move to an industry standard model, but roll it out in bite size chunks rather than scoffing the entire thing. This suits the project funded world that we live in where the first passenger never wants to pay for the entire bus.

    Check out the graft tool at http://www.jodoro.com, and if you want to have a crack at developing your own model, I recommend watching the 3 minute video first.


    Saturday, December 12, 2009

    Tumbleweed at the Hifi


    Welcome back Tumbleweed... One of my all time favorite bands, I have fond memories of all ages Tumbleweed gigs at Wall Street (now the Hifi Bar). They played there again on Thursday night to a sold out crowd, about 15 years since the last time they graced the stage with their unique brand of fuzz fueled rock. I think I recognised most of the people there too - they just looked 15 years older than the last time I saw them. Tumbleweed were typically brilliant. Here's hoping they keep going and start writing again...

    Wednesday, November 25, 2009

    Mission Critical SOA

    I was trawling through some old presentations over the weekend and I stumbled across something I’d presented on “Mission Critical SOA” at an Enterprise Java Australia event a couple of years ago. A colleague and I put this together a little earlier in the evolution of SOA, when it was closer to the height of its hype cycle and promising to be the answer to every CIO’s problems. Having worked through a number of challenging SOA implementations since then, the guts of this presentation are still very relevant, so I figured I’d reproduce the main ideas here in a blog entry. The original presentation can be downloaded here.

    What is Mission Critical?


    It’s clear that we’ve become increasingly reliant on technology as we’ve evolved. You only have to watch how kids interact today to see technology embedding itself deeper into the way we function. I used to talk to my friends for fun, but these days it’s not uncommon for kids to communicate by occasionally passing each other an earphone for a quick listen, followed by a smile and a nod, then back to the iPhone – I must be getting real old. Whilst this is hardly a mission critical situation, the basic foundations upon which we live are supported by technology.
    • We flick the switch, we expect the lights to turn on.
    • We turn the tap, we expect water to flow.
    • We get on a flight, we expect to arrive at our destination safely.
    • We dial ‘000’, we expect to get an emergency operator.
    We expect mission critical technology to just work. If it doesn’t, bad stuff happens – lives may be lost, someone may lose plenty of dough, or someone’s reputation gets caned.

    So mission critical technology can be seen as the “technology pillars of life”. No doubt we’ve made our lives easier, but to the extent that we’ve become complacent about the risk of these pillars crumbling, we’ve also made our lives more dangerous.
    Mission critical technologies have to just work - failure is not an option. But under this simple façade, how do we actually address real mission critical concerns: making sure solutions never go down, handling exceptions elegantly, and ensuring data accuracy under massive throughput?

    We’ve become reasonably good at dealing with many of these concerns, but do mission critical and SOA work together?


    SOA + Mission Critical


    In 2007, Gartner predicted a few things:


    "SOA will be used in more than 50% of new mission-critical operational applications and business processes designed in 2007 and in more than 80% by 2010."

    "New software products for SOA have hit the market, but given their immaturity, have disappointed users in terms of reliability, performance and productivity."
    We’re nearly at 2010, and whilst 80% is a big call, there’s no doubt everyone seems to be implementing SOA, which is now on the “slope of enlightenment” and beginning to meet our expectations – or, as Matt Wright commented at last week's EJA futures event, this may be more about a shift in our expectations of SOA. The one thing Gartner did say back then that resonates strongly is that in many cases “SOA principles have been applied too rigidly, and this has led to unsatisfactory outcomes as projects became too costly and didn’t meet deadlines”. We are still some way from maturing to the extent that we can reliably deliver Mission Critical SOA solutions. Addressing this challenge requires us to distinguish between means and ends.

    Means and Ends

    The fundamental business outcome (ends) we are striving for in any SOA delivery is business agility: the ability for the business to adapt to changing needs. We’re looking for rapid delivery cycles, shorter time to value, lower delivery risk, and only incremental delivery costs when introducing new capabilities.
    Underpinning agility we have the “enabling” outcomes - the outcomes we strive for that naturally result in agility: maximised reuse, infinite extensibility and maximised interoperability. It’s obvious why we strive for these outcomes:
    • We want reuse, so we abstract service designs to produce agnostic services that are not tied to a specific business process.
    • We want extensibility, so we design loosely coupled SOA solutions that minimise dependencies between services allowing easy adaptation to future needs.
    • We want interoperability, so we follow industry standards to maximise the possibility for re-use and easy integration.
    So far so good… This is all a part of the standard formula for SOA benefits realisation, however, when you add Mission Critical to the equation, a tension arises between our means for SOA outcomes and fundamental mission critical requirements such as Performance, Reliability and Availability.



    Abstraction and Reuse


    High levels of re-use on individual services lead to increased performance, reliability and availability requirements on those services.
    A mobile subscriber service at the centre of the universe for a Telco can become a single point of failure should the enterprise rely heavily on it to deliver core business functionality. The service must now meet the performance needs of all consumers that depend on its functionality, and must be as available and reliable as its neediest consumer requires. The key point here is that the more we centralise solution logic into reusable services, the more we need to consider the ability of those services to meet their non-functional requirements (NFRs) now and into the future.

    Extensibility


    Similar tensions exist when considering extensibility. In focussing on designing loosely coupled services, we distribute functionality across a service taxonomy. In doing so, we significantly increase the number of service-to-service interactions, leading to performance overheads, especially when using a standard protocol such as SOAP/HTTP.


    Interoperability


    Whilst adopting standards such as SOAP/HTTP is a great idea in our quest for ultimate interoperability, we also adopt its baggage: a verbose communications protocol leading to runtime performance overheads, and an inability to support reliable messaging and transactional integrity, all leading to reliability concerns.


    Whilst there are many Web Service standards (WS-*) aiming to address these issues, they are at differing levels of maturity and as such are not supported by all SOA stacks available.
    Some recommendations

    Firstly, it’s important to acknowledge that many of the levers sit with the technology rather than the architecture. The most brilliant architecture, implemented poorly, will ultimately result in failed outcomes. So:
    1. Review the SOA principles and determine which ones are important to you
    2. Define a set of standards and patterns which the organisation will follow - but the key is to make these guidelines the default position, and deviate where the benefits outweigh the costs.
    As an example of this, on a recent project for one of our customers we’ve implemented an SOA solution with a typical mission critical profile:
    • 24x7 uptime
    • Transactional integrity is paramount
    • Millions of transactions per day
    Some key considerations are:
    1. Solution is to be based on standard communications protocols (SOAP/HTTP)
    2. Solution is to be developed using the provided SOA stack and technologies (e.g. BPEL) as far as possible
    3. Solution must scale up and down as far as possible
    Given this, we have 2 options.
    1. Throw a wall of silicon at it just to get it to run at the required volumes
    2. Deviate from the standards where necessary to get the required performance.
    In this case we took the decision to make use of native communication protocols via WSIF (RMI-IIOP) and built-in SOA stack optimisations to significantly improve transactional performance and to provide transactional integrity support between service calls. The idea here is to define standards and use them as the ‘default’ where they are fit for purpose.

    Break the rules provided you are doing it for the right reasons, and in a controlled manner.
    The right reasons means we must understand the rules, why they are there and what their limitations are. We must also understand the technology, not only the standards but how the product sets implement them, in order to understand any traps. A controlled way means we must ensure there is some governance to avoid throwing the baby out with the bathwater. Establish an Architecture Review Committee, and ensure they don’t operate in a vacuum. There should be a good mix of architects, business representatives and hands on technical specialists to get the best holistic “outcome”.

    The upshot - if managed effectively we can achieve most of the benefits of SOA and meet all of our mission critical drivers without introducing prohibitive cost.

    When should we use SOA?

    So when is it appropriate to adopt a vanilla SOA approach versus an approach that requires deviation from SOA principles?

    This can be modelled via the following quadrant - the horizontal axis shows increasing levels of change and/or reuse, with the vertical axis showing an increasing level of mission criticality.



    • Sweet spot (Green): In the bottom right hand corner, with high levels of re-use and low levels of mission criticality, we get the most out of a vanilla SOA approach. Just follow the rules and watch the benefits roll in.
    • Easy Cases (yellow): Here we have low mission criticality, but also low levels of re-use, so we need to understand the business case for SOA. Do the additional overheads, such as governance, really make sense? Here we should optimise for budget.
    • Hard Cases (red): As we increase the mission-criticality, we need to start thinking about the trade-offs. Do we compromise robustness? Budget (additional effort and hardware)? The hard cases are where we need to optimise the use of stock standard SOA for the outcomes we are trying to achieve and this is the arena I’ve been discussing.
    • Mission Critical (blue): It’s these cases where we really need to think about what we are trying to achieve. Does it make sense for this to be an SOA solution, what are the real tangible benefits that we can derive from an SOA approach? What defines success? Can SOA deliver it?


    SOA doesn’t replace everything that preceded it, and it is not “the one true path”. Like any technology, it builds on successful ideas from the past and leverages new technology innovations. There is great value in SOA but there is even greater value in perspective. Understand your business first. Then do what makes sense.

    The mapping between concept and reality is not transparent. The truth of the matter today is that to realise your vision, you do need expertise in the underlying technologies — what works well and what doesn’t.


    While all of us work in an abstract industry, we are here, at the end of the day, to deliver tangible things. No matter how elegant, how compliant and how service oriented an architecture is, the business just wants a robust solution that works.