This is the second of two posts from Craig Byers first published on the Is the BBC biased? website.
One of the less-reported things about last week’s European Scrutiny Committee’s encounter with the three top BBC bosses was that it discussed something close to our hearts: monitoring bias.
What I took away from it was that, after the Wilson Report into the BBC’s (pro-)EU coverage, the BBC pledged to put some form of monitoring in place but, having tried it, has since abandoned monitoring and won’t be re-introducing it in the run-up to the EU referendum.
Sir Bill Cash, repeatedly citing News-watch‘s close monitoring of the BBC’s EU coverage, argued that the BBC ought to be carrying out such monitoring and making its findings publicly available for people to check. He wants a Hansard-style logging system, comparable to News-watch’s extensive archive of transcriptions, and, given the BBC’s huge budget and sheer size, wanted to know why it isn’t doing so.
The most concise statement of the BBC’s position came from David Jordan, the BBC’s head of editorial policy and standards:
I think we gave up the monitoring that the chairman is talking about at the time because we found it to be actually very unhelpful and not helpful at all in even deciding and defining whether we were impartial.
And I think in the context of other appearances and elections we’ve discovered the same thing. For example, if you’re covering an election how do you define somebody who’s in a particular party but is opposing something that party is doing at the time they were appearing on the radio? Are they, as it were, in that party’s column or are they in another column that tells you what they were doing? It becomes very, very confusing and doesn’t necessarily sum up the nuances and differences that exist in election campaigns in our experience.
So that was the reason I think why we gave it up.
It was also very, very expensive and time-consuming too.
And we thought that allowing editors to be essentially responsible for impartiality in their output and having an overall view which we get through a series of meetings and discussions which take place in the BBC, were a better way to ensure we achieved impartiality than through simple number-counting.
I have to say I laughed when he said that such monitoring had proved to be “actually very unhelpful and not helpful at all”. Cynically, I thought, “I bet it wasn’t – especially if it came up with the ‘wrong’ results” (a bit like the Balen report?).
I didn’t buy his example either. For me, it’s hardly rocket science to, say, note in one column that Kate Hoey is a Labour Party representative and in another column to note that she’s anti-EU. I can’t see why that would be “very, very confusing”.
I don’t buy the “very, very expensive and time-consuming” argument either. If a small number of people at News-watch can monitor and transcribe every EU-related interview on major BBC programmes over many, many years, then surely an organisation with the size and resources of the BBC can run something similar for its major news bulletins and flagship programmes too. It’s not that difficult. I work full-time and still managed to monitor every political interview on all the BBC’s main current affairs programmes for nine months (in 2009-10) – and at no expense whatsoever!
Also, if you simply rely on editorial judgement – on both the small and large scales (in individual programmes and at senior editorial meetings) – then many individual biases could result and multiply. In an organisation containing so many like-minded people as the BBC, those biases would doubtless head in the same direction and become self-reinforcing. Therefore, they probably won’t be spotted as biases at all – merely sensible, impartial BBC thinking. Who then would be able to point out that it isn’t being impartial after all?
Given that many people think that this kind of groupthink is the problem and that, as a result, the BBC are blind to their own biases, asking us to trust the judgements of BBC editors en masse isn’t likely to reassure us….
….which is where what David Jordan derisively calls “simple number-counting” comes in.
Suppose that over a year there are, say, 60 editions of Newsnight that deal with the UK-EU relationship in some way. If 55 of those editions featured a pro-Stay guest but only 35 featured a pro-Leave guest, then number-crunching would surely raise a serious question about the programme’s impartiality?
If, say, nine of those pro-Leave guests came from UKIP and the other 26 came from the Conservatives, but no pro-Leave Labour or Green guests appeared, then wouldn’t that also indicate a serious bias?
Is it really beyond the ability of programme editors to count and record such figures – and to then make them publicly available?
If their figures showed exceptional impartiality (45 pro-Stay guests, 45 pro-Leave guests), then they would surely win more people over, wouldn’t they?
What would they have to lose?