A World of Choices
How many articles have you seen this week talking about the latest technology to help improve your life? Never mind the number of pieces telling you about the latest health scare, what does or doesn't give you cancer this week, and how to find investment dollars for your latest idea.
What about your work life? The Internet of Things is coming to connect everything under the sun, there is a new technology you aren't using for project/people/process management, and you should scrap your org chart in favor of a flat corporate structure.
How do you make sense of what you should adopt, what is hype, and what you should avoid entirely?
If you had infinite time and resources you could give everything a test run and see what sticks. Unfortunately, most of us are not in that position. How do the rest of us manage growth without interrupting what is already working?
We are constantly bombarded with new software versions and new operational methodologies. Our job is to sift through all of it and find what we think fits our customers' needs, so we can provide them with the tools that will serve them best.
The first thing we do when assessing a new idea is to set appropriate expectations for what it can do. Until we have hands-on experience suggesting otherwise, we scale back the marketing promises. 99.9% of the time the reality of a product lands somewhere between zero and what the marketing material claims. With a conservative estimate of what the product will deliver, we can dive into the next step.
We take our estimated benefits and value of a product and spend a little time imagining how it will impact our workflow and integrate with our customers' systems. We also weigh potential pitfalls with existing systems and look for new pitfalls the product itself might introduce. The trick is to take an 80/20 approach: spend a small amount of time on a thought exercise to get a rough picture of where the end result will land.
Notice at this point we have not spent any money, invested any development time, or integrated anything.
A lot of times we can stop here. A new product may not integrate well with existing systems, might introduce new shortcomings without addressing existing ones, or simply may not be in the wheelhouse of what will benefit our customers.
If the results of the thought experiment intrigue us, we will invest time in actually learning the software. This gives us a deeper understanding of its capabilities and either confirms or refutes our hunches about functionality, shortcomings, and overall value. Again, the goal here is to spend a small amount of time learning without investing in a full-scale deployment.
If the idea makes it through this phase we can be confident in its ability to provide value and we will add it to our repertoire. Keeping in mind the idea of the 80/20 principle we can then move onto the next step.
Now we are confident in the idea or product. This is the time to find where it fits into your to-do list. If it is something with a low overall cost and a high impact, it likely belongs at the top. If instead it requires education, carries high costs, or has other hurdles preventing it from being a slam dunk, it goes toward the middle or bottom of the list while those hurdles are overcome.
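As a loose sketch, the prioritization above can be expressed as a simple impact-versus-cost ranking. The ideas, scores, and field names below are invented purely for illustration; in practice the scores are judgment calls, not measurements.

```python
# Hypothetical sketch: rank candidate ideas so that high-impact,
# low-cost items land at the top of the to-do list.

def prioritize(ideas):
    """Sort ideas by impact relative to cost, best ratio first."""
    return sorted(ideas, key=lambda i: i["impact"] / i["cost"], reverse=True)

# Invented example backlog with made-up 1-10 scores.
backlog = prioritize([
    {"name": "equipment integration", "impact": 8, "cost": 5},
    {"name": "flat org chart",        "impact": 2, "cost": 4},
    {"name": "pilot MES tracking",    "impact": 6, "cost": 2},
])

for idea in backlog:
    print(idea["name"])
```

The point is not the arithmetic but the habit: make the cost and impact assumptions explicit before anything lands at the top of the list.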
When it comes time to make the investment in the new technology, we take a similar approach: a small investment up front to understand the overall impact, only now a step further along, since we have already vetted many of the risks and rewards.
Rinse and Repeat
Before we would ever suggest someone do a full-scale implementation of a new technology, we start with a pilot version. For an MES system this might be tracking 5% of your total process; for a QA/QC system, a handful of tests out of many; for equipment integration, starting with only the first portion of the line. The goal here is to walk our customer through the process outlined above with their involvement and investment.
If the pilot program works, only then would we suggest expanding to a full implementation. If for some reason it doesn't work out, we can take another approach without having invested a lot of time and resources in the experiment.
Our approach is pretty simple. It is based on the scientific method: we come up with a hypothesis, "Will this do what the brochure says?", and run experiments to get an answer. From there we run more experiments, and as the data paints a more complete picture we put more faith in the tools if they are working, or quickly adapt and try a new approach if they are not.