Peer review, Quality and Open Access (2)

As with all issues that occupy me from time to time, I find myself wondering if I have anything worthwhile to add to what others have already written. (One well-written, extensive source is Peter Suber’s Open Access page.)

This shows that Quality considerations (for me) are everywhere, and certainly already present at the very beginning of any creative undertaking. Too much perfectionism and apprehension (`is what I have to say worthwhile?’) can be very blocking. Yet if what I wish to say has already been said more clearly by others, then I would do myself and others a favor by going for a walk instead of writing down my jumbled thoughts. (Except that maybe some would find solace in seeing a fellow jumbled thinker… 🙂 )

So this blog is written in a compromise vein: recognizing my jumbled-thinking limitations but allowing for the possibility that somewhere in these musings someone may find something worthwhile. But what does that mean, worthwhile?

Quality, I mean to say, is paramount.

But I have not yet come across a sharp definition of quality, in any field. We seem to rely on a limited number of mechanisms to identify, say, `high quality’, `fair quality’, `passable quality’, `mediocre’ and `crap’, and anything beyond or in between.

Let me list some of these mechanisms, the ones that come to my jumbled mind at this moment:

  1. Personal appreciation
  2. Creators’ peer review (`expert’ review)
  3. Consumers’ peer review (what do my buddies think, what does my boss think, what do my children think, is this `cool’ or what…)
  4. Result matching objective (winning a sports event, turning a business profit, building a nuclear bomb that works, a portrait in which the portrayed person is recognized by others,…)
  5. Sustainability
  6. Originality
  7. Conformity
  8. ….
  9. ….

Clearly these mechanisms overlap, being interdependent and in a constant state of re-evaluation. Estimate the complexity, exponentiate it, and you are probably still short of what really takes place here. This amazing complexity helps to explain why hardly anyone ever seems to question the validity of our Quality-Assessment Framework (QAF), which I tried to outline with the above mechanisms.

And yet, our history shows that our past QAFs have been wrong more often than not… in the long run, viewed from our current QAF 🙂

From the happy-go-lucky daily-life point of view, this is hardly a new insight. But scientifically, it should raise more than one doubt about the validity of current peer review practice. Similar doubts should be raised, I think, in other disciplines where experts dominate the QAF.

One other issue which I will try not to forget in subsequent posts is that `result-matching-objective quality’ often appears narrow-sighted to me (short-sighted if you wish), which I tried to exemplify above with the working nuclear bomb.

(to be continued)


About fwaaldijk

mathematician (foundations & topology in constructive mathematics) and visual artist
