The Data Day

I remember reading an essay - I think it was by Paul Feyerabend - that argued against holding science in higher esteem than other methods of reasoning. I was angry with the article at the time. To me, a recent science graduate, it was as if he was kicking the world I knew out from under my feet. He seemed to argue that science isn't important.

It's taken me a couple of years to get it, but I think I now understand a bit of what he was on about. I vaguely remember the word scientism being bandied about in the article. As I remember, scientism is essentially the belief that science holds all of the answers. Thinking about it now, Mr Feyerabend was not arguing against science but against scientism.

And this is something I think I understand now.

Cryptic research notes

A project at work


There's an ongoing project at work with the goal of making it easier for the people who use our website to find the information they're looking for. Technically this is a really easy project. Sure, there is some web development work to do and there will be work to move pages to new sections, but this is all stuff a web team is trained for. What's tricky is that there are a significant number of internal (and some external) stakeholders to please. Nothing new there.

Before we could even start the project, we invited a third party to carry out some research into the way our current audiences use our website. Some of that research has proved really valuable - we did some excellent interviews with users of our website, something we'd never done before. But the research does have its flaws.

Flaws in the research


A major flaw - at least in my opinion - is the limited sampling we did in the quantitative phase of the research. Our sample of users was too small and too narrow to draw meaningful conclusions.

Firstly, I'm not sure we had a large enough sample to make fairly important decisions about where sections of the website should live. For example, how can we say "75% of users clicked on this button to find information" when we only have data from six user sessions? (This isn't to say we shouldn't make those changes - there are plenty of other reasons to do so - just that the research we did does not categorically prove they're a good idea.)
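To make the point concrete: the post doesn't give the exact click counts, but assuming something like five of six sessions, a standard confidence interval for a proportion shows just how little six sessions pins down. This is a sketch using the Wilson score interval, with the counts invented for illustration.

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion of k successes in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical counts: 5 of 6 users clicked the button
lo, hi = wilson_interval(5, 6)
print(f"observed {5/6:.0%}, but the plausible range is roughly {lo:.0%} to {hi:.0%}")
```

The interval spans from well under half to nearly everyone - which is the quantitative version of "six sessions can't categorically prove anything".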

Secondly, the sampled group is a very specific audience of our website. One specific audience of several specific audiences. "How many users did you speak to who use our section of the website?" It's a fair question to ask and the fair answer is "One".

Finally, we haven't had the chance to argue against the research presented to us. More than that, we haven't had a chance to see the raw data - or I haven't had a chance, anyway. I'm not saying the research is bad, or that I don't trust it. The agency we worked with is one of the best in the field and has worked with a tonne of other charities. What I mean is that a major part of understanding a piece of research is picking apart its flaws.

Difficult discussions


Lots of my discussions with internal stakeholders have been made difficult by having to follow the findings of research that I never had a say in. It's hard to push back on a colleague who questions the strength of the research when you yourself believe there are flaws in it.

I don't mean to bash the research too much. As I wrote above, a lot of it is awesome - particularly the qualitative bits. But we do seem to be unquestioningly accepting the data, using it to make the difficult parts of a project easier (ie convincing stakeholders that the project's a good idea) through the belief that "they can't argue with the data."

Perhaps that's what scientism means in the workplace - the belief that data, any data, offer an inarguable justification for a project to continue. For me, we should view the initial research more as a baseline to test future interventions against. We started the project by asking an incredibly vague question, and we did not carry out the number or range of experiments (eg user experience interviews or screen captures) needed to answer it.

Future direction


I'd love for us to start thinking more like scientists. We have a whole document of testable hypotheses (provided by the agency we worked with) for improving user experience across our website - we now need to devise rigorous experiments that attempt to disprove them. And we need to be open with the people concerned about "their webpages" that we're starting with hypotheses, and that this means uncertainty.

But we will use experiments to test our changes. And more importantly, a failed experiment is incredibly useful as it will allow us to develop a new hypothesis - moving our website towards something that all users find intuitive.
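What "testing our changes" could look like in practice: compare click-through on a page before and after a change, and ask whether the difference is bigger than chance would produce. This is a minimal sketch of a two-proportion z-test; the before/after counts are entirely made up for illustration.

```python
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided p-value for the difference between two proportions (pooled z-test)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                      # pooled proportion under the null
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # standard error of the difference
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 30/200 sessions found the info before, 55/200 after
p_value = two_proportion_z(30, 200, 55, 200)
print(f"p = {p_value:.3f}")
```

A small p-value here would mean the improvement is unlikely to be noise; a large one means the hypothesis survives to be tested again - and either outcome teaches us something, which is the point of the paragraph above.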
