Cite as: Rose, D. C., Parker, C., Park, C., Fodey, J., Sutherland, W. J., and Dicks, L. V. 2018. Involving stakeholders in agricultural decision support systems: improving user-centred design. International Journal of Agricultural Management 6 (3-4): 80-89.
Now available open access!
Our new paper is out today! The paper looks at how we can encourage designers of decision support systems to adopt user-centred design practices – in other words, focusing on the needs of the intended end user so that systems are relevant, usable, and credible. Our motivation for this work came from a study as part of Defra’s Sustainable Intensification Platform – this research found that many decision support systems in agriculture were simply never used in practice. What a waste of time and money!
We suggest a six-stage process for designing decision support systems and term this a ‘decision context assessment’. We argue that funders of projects should require researchers to produce a plan which pays careful attention to each of these six stages. Ultimately, this will improve the chances of the system being used in practice – if it isn’t actually used by the intended user community, then funders’ money and scientific time are wasted! Common sense tells you that to carry out each stage effectively, you need to involve (or at least think about) the user at every single stage. The six stages are:
“1) Who is the user? – identify a clear user, understand their workflows, and ask about their needs. Needs and workflows differ between audiences – you can’t assume that a one-size-fits-all approach will work! – result: tool designed for a clear audience.
2) Why should they want to use it? – the system might be scientifically robust and impressive, but ask whether there is a need for it from a user perspective. Ask whether it is better than how decisions are currently made (you’ll need to ask users how they currently make decisions!). You need to show that it adds value to the user, whether financially, by saving time, or by helping them meet compliance or market requirements. – result: tool has a unique selling point and is needed.
3) Can they use it? – test whether users are able to use it effectively, and find out whether they can practically use it in a given setting (e.g. is there internet access on-farm?). – result: tool works in the intended use setting (e.g. on a farm).
4) Is it easy to use? – related to point 3, but there is a distinction between merely being able to use a tool and being able to use it easily. Ask about user design preferences and test it on actual end users rather than colleagues! – result: tool is easy to use, and users actually want to use it.
5) Is there a delivery plan? – ask how users will find out about the system. This might involve making use of existing trusted peer and adviser networks. – result: tool becomes well-known to its intended users.
6) What is the legacy? – if the tool needs to be updated consistently to maintain relevance, consider how to do this once funding ends. Do you need to maintain a technical support helpline for users? – result: tool continues to work long after implementation.”
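For project teams who want to report against these criteria, the six stages could be tracked as a simple checklist. Here is a minimal sketch in Python – the `Stage` class and `outstanding` helper are hypothetical illustrations of the idea, not something from the paper:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One stage of the decision context assessment."""
    question: str
    intended_result: str
    complete: bool = False

# The six stages and their intended results, phrased as in the paper.
assessment = [
    Stage("Who is the user?", "tool designed for a clear audience"),
    Stage("Why should they want to use it?", "tool has a unique selling point and is needed"),
    Stage("Can they use it?", "tool works in the intended use setting"),
    Stage("Is it easy to use?", "tool is easy to use, users actually want to use it"),
    Stage("Is there a delivery plan?", "tool becomes well-known to its intended users"),
    Stage("What is the legacy?", "tool continues to work long after implementation"),
]

def outstanding(stages):
    """Return the questions that still need attention."""
    return [s.question for s in stages if not s.complete]

# Example: once the user has been identified, five questions remain.
assessment[0].complete = True
print(outstanding(assessment))
```

A structure like this makes it easy for a funder to ask, at each reporting milestone, which stages remain incomplete and why.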
We hope that this paper fosters greater interest in user-centred design methods and, ultimately, leads to the design of usable systems which make a difference in policy and practice! If funders made sure that tool designers reported consistently against the criteria above, then tools would have a clear audience and a unique selling point, work on the ground, be easy to use, become well-known to users, and continue to work after implementation.