I’m not sure how I got this (the To: line on the email is not me), but it appears that Mr. Schlesinger is a prose poet. I received this email this evening.
I have an insect friend – a lone wasp clinging to my refrigerator, 11
days into winter.
S/he/it seems sad and forlorn, and me also about her/him/it.
T’was there last night, when first I encountered my new friend.
T’was there this morning and early afternoon before I went to give the
final exam in my Climate & Climate Change course.
Is there now.
I have just smeared some honey beneath my new friend, not knowing
whether or not this would be food therefor.
But, s/he/it is doomed, as is an annual flowering plant.
So too am I, hopefully sometime later.
So too are all of us, eventually.
We need to take care of each other.
We need to take care of others who cannot take care of themselves, for
We need to take care of our home planet, a singular and unique being
in our Solar System, and likely far beyond.
Fortunate we are to have her, Mother Earth.
Let’s keep her and all of us safe & well.
My little friend has been energized and is now actively consuming the
additional honey I have given her/him/it.
It does feel good to help a fellow Earthling.
Not that a scientist can’t have metaphysical leanings, but I find it interesting how quasi-religious this email sounds.
This module is in the documents\cru-code\linux\mod directory. Apparently, this module allows the user to manually specify “forcing” and “climate sensitivity” to run through the computations.
Forgive me if I’m slow, but isn’t that what these “models” are supposed to do?
Any insight would be greatly appreciated.
! module procedure written by Tim Mitchell
! includes subroutines necessary for pattern scaling to equilibrium
subroutine GetKaySet (KaySet,Forc2co2,FitAlpha,FitBeta,Sens2co2Init,TrendTLen)
real, intent(out) :: Forc2co2,FitAlpha,FitBeta,Sens2co2Init
integer, intent(in) :: KaySet ! may be MissVal or zero or >=1
integer, intent(out) :: TrendTLen
real, parameter :: MissVal = -999.0
integer :: AllocStat,ReadStatus
integer :: QKaySet
if (KaySet.EQ. 0) then ! allow selection of QKaySet if required
print*, " > These are the available sets of constants: "
print*, " > 1 : 3.47, 1.3388, -96.613, 1.9, 100"
read (*,*,iostat=ReadStatus), QKaySet
if (ReadStatus.LE.0.AND.QKaySet.GE.1.AND.QKaySet.LE.1) exit
QKaySet = KaySet
Okay, so the module displays a set of five constants on the screen and tells the user to select a set (only one set is offered here). I’ve not yet found where the KaySet values come from, but it seems a bit odd that the program is allowing the user to select these values at all.
if (QKaySet.EQ. 1) then ! allow designation of constants if possible
Forc2co2=3.47 ; FitAlpha=1.3388 ; FitBeta=-96.613 ; Sens2co2Init=1.9 ; TrendTLen=100
QKaySet = MissVal
if (KaySet.EQ.MissVal) then ! designate constants individually
print*, " > Enter the radiative forcing for a doubling of CO2: "
read (*,*,iostat=ReadStatus), Forc2co2
if (ReadStatus.LE.0.AND.Forc2co2.GT.0) exit
WHAT??? They’re having the user enter the value for radiative forcing? Isn’t that supposed to be a variable determined by the model???
print*, " > Enter the alpha and beta parameters for dS/dT=alpha*e(beta*dT/dt): "
read (*,*,iostat=ReadStatus), FitAlpha, FitBeta
if (ReadStatus.LE.0) exit
print*, " > Enter the initial climate sensitivity for a doubling of CO2: "
Once again, WHAT?????? How is this determined? Since it’s a user-specified value, where does the user get the appropriate value to enter?
read (*,*,iostat=ReadStatus), Sens2co2Init
if (ReadStatus.LE.0.AND.Sens2co2Init.GT.0) exit
print*, " > Enter the period length over which to calc dT/dt: "
read (*,*,iostat=ReadStatus), TrendTLen
if (ReadStatus.LE.0.AND.TrendTLen.GT.0) exit
end subroutine GetKaySet
end module PattScale
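For readers who don’t speak Fortran: strip away the prompts and the whole subroutine boils down to picking (or typing in) five numbers that then drive the pattern-scaling computation. Here’s a rough Python paraphrase of what the one built-in “set of constants” amounts to — the names and layout are mine, not CRU’s:

```python
# Paraphrase (my names, not CRU's) of the single constant set that
# GetKaySet offers on screen as option 1.
KAY_SETS = {
    1: {
        "forc2co2": 3.47,       # radiative forcing for doubled CO2 (W/m^2)
        "fit_alpha": 1.3388,    # alpha in dS/dT = alpha * e^(beta * dT/dt)
        "fit_beta": -96.613,    # beta in the same fit
        "sens2co2_init": 1.9,   # initial climate sensitivity for doubled CO2
        "trend_t_len": 100,     # period length (years) for the dT/dt trend
    },
}

def get_kay_set(kay_set):
    """Return the chosen constants; mirrors the Fortran's hard-coded branch."""
    return KAY_SETS[kay_set]
```

The point stands either way: whether the user picks set 1 or types in values by hand, the forcing and sensitivity are inputs here, not results.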
And then I see, in documents\yamal\sf2.txt:
Stepan Shiyatov said me you need only data covered last 2 millenium. Now I send data of 35 samples covered earlier millenium. These are all samples concerning this period and they are checked (there are about 130 more samples from 0 to 1800 AD not checked at this time). I hope your desire to see low growth about 350 BC will be more or less satisfied. However for some reason there are no good correlation between number of samples and growth rate. For instance, about 700 BC provided by only one sample with very high growth during just this period. I don’t know why, may be number of trees depends on burial conditions as well.
I have to note that 364 BC (not 360 BC as I wrote before) on sample No. 60 slightly looks like false. On sample 453 it is normal ring, on other sample it is very small. Therefore I can’t still say something definitely.
“I hope your desire to see low growth about 350 BC will be more or less satisfied”???? I’m guessing that low growth = low temperature in CRUland, and if that’s the case, is the emailer hoping the data proves to be to Keith’s liking? And what’s the deal with more samples not producing the desired growth rate? It sounds a lot like cherry-picking to me.
I understand those who are struggling to find any way to justify what is contained in the .zip file. Some say that the above code is meaningless unless proven to have been used in a publicly distributed document. Others say the email is irrelevant. But taken together, it’s irrefutable that this group has been, and still is, attempting to defraud the people of the world using garbage data.
Thank you for your e-mail below.
I am a bit confused thereby.
Is the code you are plowing through one that is used to analyze the
temperature observations, or is it a general circulation model used to
simulate past, present and possible future climates?
Also, please explain what you mean by “the outright fraud committed in
Lastly, your physicist friend is not correct about the relative role
of greenhouse gases and the sun.
I attach a paper wherein we analyzed the contribution by both and by
Since our analysis in the above paper in 2000, the variations in the
output of the sun constructed from proxies, such as sunspots, for the
period before we started to observe the sun from space, 1978 to the
present, have been reduced by about a factor of 5.
If you wish I can send you Judith Lean’s paper on this.
We have now extended the analysis in our paper above to include about
10 more years of temperature observations, and the result is shown here:
You can see that the contribution to the observed warming by the sun
is quite small.
If you are interested, I can answer the question in the right-hand
Lastly, here I share with you an update of the graph that appeared in
Andy Revkin’s blog on 30 November:
You can find what I wrote there at: http://dotearth.blogs.nytimes.com/2009/11/30/more-on-the-climate-files-and-climate-trends/
I may be delayed in responding to any further e-mail messages from
you, as tomorrow is my last class in my Climate & Climate Change
course, with its final exam being this Friday. So, I will need to
grade this exam and my students’ term papers. But, thereafter I will
respond to any further e-mail messages you send me.
One final thought. I have 3 children aged 37, 35 and 17 years old.
And I have 6 grandchildren aged from 6 years to 1 year old. I too am
very concerned about their welfare, the welfare of their children, and
the welfare of all children everywhere, especially in Africa & south
Asia, which locations will be hit hardest by human-caused global-
warming/climate change. If you wish I can elaborate this. What is
not widely appreciated is that the climate change that we have caused
and are causing has a very long lifetime because Earth is 70% covered
by ocean which, unlike the land, can move vertically, thereby removing
heat from the surface and delaying the time required to re-equilibrate
the climate system. If you wish, I can expand on this further.
But for now I must return to preparing for my last class tomorrow.
After reading the obnoxiously threatening email Mr. Schlesinger sent to a reporter, I emailed him my thoughts on the subject. Amazingly, he emailed back. The response is below.
While I was pleasantly surprised that his tone was far more reasoned than the email he composed to the reporter, I wasn’t shocked that he assumes me too ignorant to know that physics is hardly a hands-down support system for AGW.
Thank you for your e-mail message to me below.
Science has known for over 100 years that our burning fossil fuels – Nature’s gift to humanity, without which we would have been in a perpetual dark age – would cause global warming/climate change.
The physics underpinning this is irrefutable.
The physical evidence of human-caused global warming/climate change is all around us, and is undeniable.
We can either choose to:
(1) Ignore this physics and physical evidence of global warming/climate change and, thereby, risk the irreversible outcome therefrom;
(2) Face the problem squarely and begin the very difficult task of transitioning ourselves this century from the fossil-fuel age to the post fossil-fuel age.
In my public lectures and debates, I advise the world to choose Option 2.
I sense that in about 20 years’ time, if I am still alive, people will say to me: “Why didn’t you tell us about human-caused global warming/climate change?” I will reply, “But I did, for almost 60 years.” “Yes,” they will say, “but why did you not make us believe it?” And I will respond, “Because you chose to not so do.”
I can no longer aid a journalist who aids those who recommend Option 1, thereby putting the world at great risk. And so I have now ceased to do so – the ‘Great Cutoff’.
Elena, you are probably much younger than I, hence this is your planet.
I hope that you will make informed decisions about her well-being and yours.
P.S. What does GIGO-laden code mean?
On Dec 7, 2009, at 6:38 PM, emelks wrote:
reveals you as a thug, not a scientist. Enjoy what little limelight remains, we the people aren’t going to submit to your GIGO-laden code.
My response to him:
Thank you for a reasoned response.
I’ve spent the past week plowing through the code leaked from UEA and am appalled at the outright fraud committed in the code. It’s obscene.
My best friend is a physicist and he refutes the notion that physics absolutely proves AGW. He posits that solar fluctuations are solely responsible for climate variation and that the brouhaha over AGW is nothing more than politics abusing science as an alternative to religion.
Furthermore, as a serious gardener I watch weather carefully and I can say with absolute certainty that the climate in my area has cooled noticeably in the past 4 years. I don’t need Kevin Trenberth to tell me what my own experience has proven. That the junk code I’ve read can’t predict a significant trend in the other direction–even with, or perhaps due to, the hardcoded “fudge factors”–indicates to me that models aren’t worth the electrons used to run them.
I am young, with children, and I refuse to throw away their futures based upon fatally flawed models and politicians’ rhetoric. I want my children to enjoy the prosperity and freedoms with which I grew up and I will not stand idly by and watch it all purposefully destroyed to advance anyone’s political agenda.
Thanks again, and I hope you plan to revisit your assumptions based upon the information revealed from UEA.
PS–GIGO means garbage in garbage out.
So why did Australia unexpectedly dump their “cap and trade” scheme? Could it be related to the following from the HARRY_READ_ME.txt file, where they talk about throwing two-thirds of the Australian data out, then bringing it back in, because dumping it for HADCRUT3 would reveal the lies processed into HADCRUT2? The money quote is at the end:
I’ve tried the simple method (as used in Tim O’s geodist.pro), and the more complex and accurate method found elsewhere (wiki and other places). Neither gives me results that are anything near reality.
Does anyone else get a laugh out of these supposedly brilliant “scientists” going to WIKIPEDIA to get code?
I’ve tried to snip out the worst of the code-mumbo-jumbo for those not inclined to read code, but left enough that no one may claim I’m taking anything “out of context.”
Decided to process temperature all the way. Ran IDL:
then glo2abs, then mergegrids, to produce monthly output grids. It apparently worked:
As a reminder, these output grids are based on the tmp.0705101334.dtb database, with no merging of neighbourly stations and a limit of 3 standard deviations on anomalies.
Decided to (re-) process precip all the way, in the hope that I was in the zone or something. Started with IDL:
“Hoping” to be close to right? Sounds like junk code to me. But I digress.
Then glo2abs, then mergegrids.. all went fine, apparently.
31. And so.. to DTR! First time for generation I think.
Wrote ‘makedtr.for’ to tackle the thorny problem of the tmin and tmax databases not being kept in step. Sounds familiar, if worrying. Am I the first person to attempt to get the CRU databases in working order?!! The program pulls no punches. I had already found that tmx.0702091313.dtb had seven more stations than tmn.0702091313.dtb, but that hadn’t prepared me for the grisly truth:
Yes, the difference is a lot more than seven! And the program helpfully dumps a listing of the surplus stations to the log file. Not a pretty sight. Unfortunately, it hadn’t worked either. It turns out that there are 3518 stations in each database with a WMO Code of ‘ 0’. So, as the makedtr program indexes on the WMO Code.. you get the picture. *cries*
Rewrote as makedtr2, which uses the first 20 characters of the header to match:
The big jump in the number of ‘surplus’ stations is because we are no longer automatically matching stations with WMO=0.
Here’s what happened to the tmin and tmax databases, and the new dtr database:
Old tmin: tmn.0702091139.dtb Total Records Read: 14309
New tmin: tmn.0705162028.dtb Total Records Read: 14106
Del tmin: tmn.0702091139.dtb.del Total Records Read: 203
Old tmax: tmx.0702091313.dtb Total Records Read: 14315
New tmax: tmx.0705162028.dtb Total Records Read: 14106
Del tmax: tmx.0702091313.dtb.del Total Records Read: 209
New dtr: dtr.0705162028.dtb Total Records Read: 14107
*sigh* – one record out! Also three header problems:
BLANKS (expected at 8,14,21,26,47,61,66,71,78)
Why?!! Well the sad answer is.. because we’ve got a date wrong. All three ‘header’ problems relate to this line:
6190 94 95 98 100 101 101 102 103 102 97 94 94
..and as we know, this is not a conventional header. Oh bum. But, but.. how? I know we do muck around with the header and start/end years, but still..
Wrote filtertmm.for, which simply steps through one database (usually tmin) and looks for a ‘perfect’ match in another database (usually tmax). ‘Perfect’ here means a match of WMO Code, Lat, Lon, Start-Year and End-Year. If a match is found, both stations are copied to new databases:
Old tmin database: tmn.0702091139.dtb had 14309 stations
New tmin database: tmn.0705182204.dtb has 13016 stations
Old tmax database: tmx.0702091313.dtb had 14315 stations
New tmax database: tmx.0705182204.dtb has 13016 stations
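Harry’s “perfect match” rule is simple enough to sketch for the non-programmers. This is my paraphrase in Python, not CRU’s code, and I’m assuming each station record carries its WMO code, coordinates, and start/end years:

```python
# Paraphrase (not CRU's filtertmm.for) of the "perfect match" rule: a
# station survives only if its (WMO code, lat, lon, start year, end year)
# tuple appears in BOTH the tmin and tmax databases.
def filter_matched(tmin_stations, tmax_stations):
    def key(s):
        return (s["wmo"], s["lat"], s["lon"], s["start"], s["end"])
    tmax_keys = {key(s) for s in tmax_stations}
    kept_min = [s for s in tmin_stations if key(s) in tmax_keys]
    tmin_keys = {key(s) for s in tmin_stations}
    kept_max = [s for s in tmax_stations if key(s) in tmin_keys]
    return kept_min, kept_max
```

Note what this rule silently does: any station whose metadata differs by even one digit between the two databases is thrown away, which is exactly why 1,300 stations vanished above.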
I am going to *assume* that worked! So now.. to incorporate the Australian monthly data packs. Ow. Most future-proof strategy is probably to write a converter that takes one or more of the packs and creates CRU-format databases of them. Edit: nope, thought some more and the *best* strategy is a program that takes *pairs* of Aus packs and updates the actual databases. Bearing in mind that these are trusted updates and won’t be used in any other context.
From Dave L – who incorporated the initial Australian dump – for the tmin/tmax bulletins, he used a threshold of 26 days/month or greater for inclusion.
Obtained two files from Dave – an email that explains some of the Australian bulletin data/formatting, and a list of Australian headers matched with their internal codes (the latter being generated by Dave).
Actually.. although I was going to assume that filtertmm had done the synching job OK, a brief look at the Australian stations in the databases showed me otherwise. For instance, I pulled all the headers with ‘AUSTRALIA’ out of the two 0705182204 databases. Now because these were produced by filtertmm, we know that the codes (if present), lats, lons and dates will all match. Any differences will be in altitude and/or name. And so they were:
crua6 diff tmn.0705182204.dtb.oz tmx.0705182204.dtb.oz | wc -l
..so roughly 100 don’t match. They are mostly altitude discrepancies, though there are an alarming number of name mismatches too. Examples of both:
0 -3800 14450 8 AVALON AIRPORT AUSTRALIA 2000 2006 -999 -999.00
0 -4230 14650 595 TARRALEAH CHALET AUSTRALIA 2000 2006 -999 -999.00
Examples of the second kind (name mismatch) are most concerning as they may well be different stations. Looked for all occurences in all tmin/tmax databases:
crua6 grep 'TARRALEAH' *dtb
tmn.0702091139.dtb: 0 -4230 14650 585 TARRALEAH VILLAGE AUSTRALIA 2000 2006 -999 -999.00
tmn.0702091139.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00
tmn.0705182204.dtb: 0 -4230 14650 585 TARRALEAH VILLAGE AUSTRALIA 2000 2006 -999 -999.00
tmn.0705182204.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00
tmx.0702091313.dtb: 0 -4230 14650 595 TARRALEAH CHALET AUSTRALIA 2000 2006 -999 -999.00
tmx.0702091313.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00
tmx.0705182204.dtb: 0 -4230 14650 595 TARRALEAH CHALET AUSTRALIA 2000 2006 -999 -999.00
tmx.0705182204.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00
This takes a little sorting out. Well first, recognise that we are dealing with four files: tmin and tmax, early and late (before and after filtertmm.for). We see there are two TARRALEAH entries in each of the four files. We see that ‘TARRALEAH VILLAGE’ only appears in the tmin file. We see, most importantly perhaps, that they are temporally contiguous – that is, each pair could join with minimal overlap, as one is 1991-2000 and the other 2000-2006. Also, we note that the ‘early’ one of each pair has a slightly different longitude and altitude (the former being the thing that distinguished the stations in filtertmm.for).
Finally, this, from the tmax.2005120120051231.txt bulletin:
95018, 051201051231, -42.30, 146.45, 18.0, 00, 31, 31, 585, TARRALEAH VILLAGE
So we can resolve this case – a single station called TARRALEAH VILLAGE, running from 1991 to 2006.
But what about the others?! There are close to 1000 incoming stations in the bulletins, must every one be identified in this way?!! Oh God. There’s nothing for it – I’ll have to write a prog to find matches for the incoming Australian bulletin stations in the main databases. I’ll have to use the databases from before the filtertmm application, so *0705182204.dtb. And it will only need the Australian headers, so I used grep to create *0705182204.dtb.auhead files. The other input is the list of stations taken from the monthly bulletins. Now these have a different number of stations each month, so the prog will build an array of all possible stations based on the files we have. Oh boy. And the program shall be called, ‘auminmaxmatch.for’.
Assembled some information:
crua6 wc -l *auhead
(the ‘auhead’ files were created with )
Actually, stopped work on that. Trying to match over 800 ‘bulletin’ stations against over 3,000 database stations *in two unsynchronised files* was just hurting my brain. The files have to be properly synchronised first, with a more lenient and interactive version of filtertmm. Or… could I use mergedb?! Pretend to merge tmin into tmax and see what pairings it managed? No roll through obviously. Well it’s worth a play.
..unfortunately, not. Because when I tried, I got a lot of odd errors followed by a crash. The reason, I eventually deduced, was that I didn’t build mergedb with the idea that WMO codes might be zero (many of the australian stations have wmo=0). This means that primary matching on WMO code is impossible. This just gets worse and worse: now it looks as though I’ll have to find WMO Codes (or pseudo-codes) for the *3521* stations in the tmin file that don’t have one!!!
OK.. let’s break the problem down. Firstly, a lot of stations are going to need WMO codes, if available. It shouldn’t be too hard to find any matches with the existing WMO coded stations in the other databases (precip, temperature). Secondly, we need to exclude stations that aren’t synchronised between the two databases (tmin/tmax). So can mergedb be modified to treat WMO codes of 0 as ‘missing’? Had a look, and it does check that the code isn’t -999 OR 0.. but not when preallocating flags in subroutine ‘countscnd’. Fixed that and tried running it again.. exactly the same result (crash). I can’t see anything odd about the station it crashes on:
0 -2810 11790 407 MOUNT MAGNET AERO AUSTRALIA 2000 2006 -999 -999.00
2000 339 344 280 252 214 202 189 196 262 291 316 377
2001 371 311 310 300 235 212 201 217 249 262 314 333
2002-9999-9999 339 297 258 209 205 212 246 299 341 358
2003 365 367 336 296 249 195 193 200 238 287 325 368
2004 395 374 321 284 219 214 173 188 239 309 305 370
2005 389 396 358 315 251 182 189 201 233 267 332 341
2006 366 331 314 246 240-9999-9999-9999-9999-9999-9999-9999
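A note for non-programmers on those run-together “-9999”s: they aren’t typos. The records are fixed-width, -9999 is the missing-value sentinel, and a negative five-character field simply has no room left for a separating space. Here’s a defensive parse in Python, assuming a 4-character year followed by twelve 5-character monthly values (the exact column widths are my guess from the lines above, not documented anywhere I’ve seen):

```python
# Fixed-width parse of a CRU-style data record: 4-char year, then twelve
# 5-char monthly values, with -9999 as the missing-value sentinel.
# Column widths are my assumption, inferred from the quoted records.
MISSING = -9999

def parse_record(line):
    year = int(line[:4])
    vals = [int(line[4 + 5 * i : 9 + 5 * i]) for i in range(12)]
    return year, [None if v == MISSING else v for v in vals]
```

Read carelessly with list-directed input instead of a fixed format, “2002-9999-9999” is one garbage token, not three fields, and that’s the kind of thing that produces exactly these crashes.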
.. it’s very similar to preceding (and following) stations, and the station before has even less real data (the one before that has none at all and is auto-deleted). The nature of the crash is ‘forrtl: error (65): floating invalid’ – so a type mismatch possibly. The station has a match in the tmin database (tmn.0702091139.dtb) but the longitude is different:
0 -2810 11780 407 MOUNT MAGNET AERO AUSTRALIA 2000 2006 -999 -999.00
0 -2810 11790 407 MOUNT MAGNET AERO AUSTRALIA 2000 2006 -999 -999.00
It also appears in the tmin/tmax bulletins, eg:
7600, 070401070430, -28.12, 117.84, 16.0, 00, 30, 30, 407, MOUNT MAGNET AERO
Note that the altitude matches (as distinct from the station below).
Naturally, there is a further ‘MOUNT MAGNET’ station, but it’s probably distinct:
9442800 -2807 11785 427 MOUNT MAGNET (MOUNT AUSTRALIA 1956 1992 -999 -999.00
9442800 -2807 11785 427 MOUNT MAGNET (MOUNT AUSTRALIA 1957 1992 -999 -999.00
I am at a bit of a loss. It will take a very long time to resolve each of these ‘rogue’ stations. Time I do not have. The only pragmatic thing to do is to dump any stations that are too recent to have normals. They will not, after all, be contributing to the output. So I knocked out ‘goodnorm.for’, which simply uses the presence of a valid normals line to sort. The results were pretty scary:
Stations retained: 5026
Stations removed: 9283
GOODNORM: Extract stations with non-missing normals
Please enter the input database name: tmx.0702091313.dtb
The output database will be called: tmx.0705281724.dtb
(removed stations will be placed in: tmx.0705281724.del)
Stations retained: 4997
Stations removed: 9318
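For context: the “normals” Harry means are the 1961–90 monthly means stored on each station’s “6190” line (the same line that got mistaken for a header earlier). My reconstruction of what goodnorm.for must be doing is below; the sentinel value and record layout are my guesses, not CRU’s code:

```python
# Reconstruction (mine, not CRU's goodnorm.for) of the sort criterion:
# keep a station only if its 1961-90 normals line exists and contains
# no missing values. I'm assuming -9999 as the sentinel here, matching
# the data records quoted earlier.
MISSING = -9999

def has_valid_normals(station):
    normals = station.get("normals")  # the "6190" line, if present
    return normals is not None and all(v != MISSING for v in normals)

def goodnorm(stations):
    kept = [s for s in stations if has_valid_normals(s)]
    removed = [s for s in stations if not has_valid_normals(s)]
    return kept, removed
```

A blunt instrument, in other words: no normals line, no station, regardless of how much real data it holds.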
Essentially, two thirds of the stations have no normals! Of course, this still leaves us with a lot more stations than we had for tmean (goodnorm reported 3316 saved, 1749 deleted) though still far behind precipitation (goodnorm reported 7910 saved, 8027 deleted).
I suspect the high percentage lost reflects the influx of modern Australian data. Indeed, nearly 3,000 of the 3,500-odd stations with missing WMO codes were excluded by this operation. This means that, for tmn.0702091139.dtb, 1240 Australian stations were lost, leaving only 278.
This is just silly. I can’t dump these stations, they are needed to potentially match with the bulletin stations. I am now going to try the following:
1. Attempt to pair bulletin stations with existing in the tmin database. Mark pairings in the database headers and in a new ‘Australian Mappings’ file. Program auminmatch.for.
2. Run an enhanced filtertmm to synchronise the tmin and tmax databases, but prioritising the ‘paired’ stations from step 1 (so they are not lost). Mark the same pairings in the tmax headers too, and update the ‘Australian Mappings’ file.
3. Add the bulletins to the databases.
OK.. step 1. Modified auminmaxmatch.for to produce auminmatch.for. Hit a semi-philosophical problem: what to do with a positive match between a bulletin station and a zero-wmo database station? The station must have a real WMO code or it’ll be rather hard to describe the match!
Got a list of around 12,000 wmo codes and stations from Dave L; unfortunately there was a problem with its formatting that I just couldn’t resolve.
So.. current thinking is that, if I find a pairing between a bulletin station and a zero-coded Australian station in the CRU database, I’ll give the CRU database station the Australian local (bulletin) code twice: once at the end of the header, and once as the WMO code *multiplied by -1* to avoid implying that it’s legitimate. Then if a ‘proper’ code is found or allocated later, the mapping to the bulletin code will still be there at the end of the header. Of course, an initial check will ensure that a match can’t be found, within the CRU database, between the zero-coded station and a properly-coded one.
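In case that scheme is as clear as mud, here’s the bookkeeping as I read it, sketched in Python (the field names are mine, invented for illustration):

```python
# Sketch (my reading of Harry's plan, not CRU code) of the proposed
# bookkeeping: a bulletin station paired with a zero-coded CRU station
# gets the bulletin's local code stamped in twice -- appended to the
# header, and negated in the WMO slot so nobody mistakes it for a real
# WMO allocation.
def mark_pairing(station, bulletin_code):
    station["wmo"] = -abs(bulletin_code)       # negative => not a real WMO code
    station["aus_local_code"] = bulletin_code  # preserved at end of header
    return station
```

So the database ends up carrying made-up negative “WMO codes” whose only meaning is “we matched this to an Australian bulletin once.”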
Debated header formats with David. I think we’re going to go with (i8,a8) at the end of the header, though really it’s (2x,i6,a8) as I remember the Anders code being i2 and the real start year being i4 (both from the tmean database). This will mean post-processing existing databases of course, but that’s not a priority.
A brief (hopefully) diversion to get station counts sorted. David needs them so might as well sort the procedure. In the upside-down world of Mark and Tim, the numbers of stations contributing to each cell during the gridding operation are calculated not in the IDL gridding program – oh, no! – but in anomdtb! Yes, the program which reads station data and writes station data has a second, almost-entirely unrelated function of assessing gridcell contributions. So, to begin with it runs in the usual way:
> ***** AnomDTB: converts .dtb to anom .txt for gridding *****
> Enter the suffix of the variable required:
> Will calculate percentage anomalies.
> Select the .cts or .dtb file to load:
> Specify the start,end of the normals period:
> Specify the missing percentage permitted:
> Data required for a normal: 23
> Specify the no. of stdevs at which to reject data:
But then, we choose a different output, and it all shifts focus and has to ask all the IDL questions!!
> Select outputs (1=.cts,2=.ann,3=.txt,4=.stn):
> Check for duplicate stns after anomalising? (0=no,>0=km range)
> Select the .stn file to save:
> Enter the correlation decay distance:
> Submit a grim that contains the appropriate grid.
> Enter the grim filepath:
> Grid dimensions and domain size: 720 360 67420
> Select the first,last years AD to save:
> NORMALS MEAN percent STDEV percent
> .dtb 7315040 73.8
> .cts 299359 3.0 7613600 76.8
> PROCESS DECISION percent %of-chk
> no lat/lon 17911 0.2 0.2
> no normal 2355275 23.8 23.8
> out-of-range 13253 0.1 0.2
> accepted 7521013 75.9
> Calculating station coverages…
And then.. it unhelpfully crashes:
> ##### WithinRange: Alloc: DataB #####
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Ho hum. I did try this last year which is why I’m not tearing my hair out. The plan is to use the outputs from the regular anomdtb runs – ie, the monthly files of valid stations. After all we need to know the station counts on a per month basis. We can use the lat and lon, along with the correlation decay distance.. shouldn’t be too awful. Just even more programming and work. So before I commit to that, a quick look at the IDL gridding prog to see if it can dump the figures instead: after all, this is where the actual ‘station count’ information is assembled and used!!
..well that was, erhhh.. ‘interesting’. The IDL gridding program calculates whether or not a station contributes to a cell, using.. graphics. Yes, it plots the station sphere of influence then checks for the colour white in the output. So there is no guarantee that the station number files, which are produced *independently* by anomdtb, will reflect what actually happened!!
Well I’ve just spent 24 hours trying to get Great Circle Distance calculations working in Fortran, with precisely no success. I’ve tried the simple method (as used in Tim O’s geodist.pro), and the more complex and accurate method found elsewhere (wiki and other places). Neither gives me results that are anything near reality. FFS.
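For what it’s worth, the “simple method” is presumably the standard haversine formula, which is a few lines and behaves fine in double precision; my guess is the failure was a degrees-versus-radians mixup or single-precision trig, not the formula itself. A Python version, assuming a spherical Earth of radius 6371 km:

```python
import math

# Standard haversine great-circle distance between two points given as
# (lat, lon) in degrees. Assumes a spherical Earth of radius 6371 km.
EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))
```

Twenty-four hours on that, and it’s the kind of routine a first-year programming student implements from a textbook.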
This is interesting. Apparently the Met Office, Newcastle University and UEA recognized before 1/2/2007 that their data is a disaster and applied for money from Defra to attempt to make sense of it. They also claim they’ll put it all on the web. The file is depra.pdf.
Dealing with the possible consequences of climate change depends on understanding predictions and taking action to mitigate against predicted changes, to adapt, or both. Deciding whether to take action will require weighing up risks and benefits and evaluating alternative strategies. Decision makers will range from individuals, through local government, to national governments and intergovernmental negotiators, and in the public sector alone, cover a gamut of professions from engineers and educators to policy makers and scientists.
Making policy requires access to knowledge, not just the underlying information and data. While data leads to information and knowledge, the steps from data to knowledge in the climate prediction arena can involve handling tens of terabytes of data (in information terms: roughly equivalent to several copies of the British Library’s entire holdings), as well as significant knowledge of the tools (models) used to create the simulations, and a background in both environmental sciences and sophisticated statistics. Managing the underlying data itself is a problem: once data volumes become large enough, hardware and software problems that are rare with small data volumes become common enough that mitigation strategies against failure within the data archive itself are necessary. Holding high volume complex data over time introduces new problems involving format migration and semantic interoperability. Software must also be produced to visualise and extract data (that might be input to other tools such as flood predictions), before producing policy relevant advice.
Defra has funded, and continues to fund, projects which produce climate prediction data, scenarios and advice for the UK climate impacts community. This work is one part of that continuum of research activity, covering the reliable storage of climate data and predictions, and the interfaces to that data to make it usefully available to the impacts community, who themselves provide policy relevant advice. Data will be extracted from the archives of the Met Office Hadley Centre (MOHC) and made available by the British Atmospheric Data Centre (BADC), a national repository for storing digital environmental data for the long term (BADC expertise and additional funding via the National Centre for Atmospheric Science will also ensure that the data will be held for posterity). The result of this phase of the work will be prototype systems coupling interfaces to the data archives developed together by the BADC and Newcastle University (both world leaders in developing web-based interfaces to complex geophysical data). The University of East Anglia and the MOHC will provide expert advice. The eventual goal will be to provide data access to both experts in the climate impacts community and the general public via these interfaces but such deployment will be expected in a future phase of Defra supported activity. The first phase, covered by this proposal, will take eighteen months. The second phase (not covered here), would improve the prototype and then provide and support public access and should begin near month twelve of this project, and continue for at least two years. The project will be carried out in close partnership with the Defra funded UK Climate Impacts Programme (UKCIP) and will contribute to the Intergovernmental Panel on Climate Change (IPCC)’s Data Distribution Centre (DDC). 
Although there will be three significant components to the work (known as the Data Delivery Package [DDP], the Climate Impacts LINK Project, and the DDC), this activity will eventually (during phase two) create a joined-up resource that serves the whole community, from research scientist to town planner.
The project takes a significant leap forward from its predecessors, exposing cutting-edge science involving complex probabilistic datasets and exploiting a Weather Generator (developed in another Defra project) to produce sample time series of weather conditions at specific UK locations in the future. We will also exploit new metadata standards developed both within this project and others with which the project participants are involved. The underlying archives will provide tens of terabytes in reliable network-attached storage with multiple-gigabit bandwidth to the wider Internet. The data interfaces will be state-of-the-art and, where appropriate, exploit the latest standards-compliant metadata structures and interfaces to make the best use of both technology and experience in other communities. An active climate scientist who is also an expert on data systems will provide UK representation on the IPCC Task Group on Data and Scenario Support (TGICA). To avoid duplication of effort between the components, the entire activity will be supported by a common management infrastructure and technical service layer which will dovetail with existing complex data and information systems at the BADC.
Although this project proposal outlines developments to deliver a system fit for deployment in a phased follow-on project, some aspects of the project, namely LINK and IPCC-DDC components, will include operationally deployed services during this prototype development phase.
This project will provide a prototype system capable of delivering ground-breaking climate change scenarios to the public and policy makers via the web. This will enable interaction with probabilistic climate datasets in such a way that the user can pose a useful question and receive a response that is both informative and retains the uncertainty inherent in climate change predictions. In the follow-up project the deployed system will allow decision-makers and industry to plan their strategies in response to indicative predictions of future climate.
The integration of the DDP with a Weather Generator model will demonstrate how such tools can be employed to add value to climate model output. It will also provide users with access to high temporal resolution data not previously available.
The LINK component (and DDP in the follow-on project) will deliver considerable usage of Defra-funded climate research outputs (UKCIP predict that potentially more than 1000 users will wish to access the DDP system). The users will make use of the data in a variety of ways including: informing policy, making strategic decisions, aiding research, exploring possible climate scenarios and understanding climate models. The IPCC-DDC element will allow climate researchers greater access to data by incorporating existing and new datasets into the BADC’s existing infrastructure.
The following reports will chart progress of the project and provide a commentary of the outputs:
– Quarterly reports, summary Financial Year reports, Annual reports, Final reports to Defra.
– Periodic reports to the TGICA.
Note that the LINK archive will also be of benefit in upcoming IPCC assessment activities (for example, it is expected that the next assessment report will use a distributed archive).
16. Staff effort
(a) Please list the names and grades/job titles of staff and their input to the project together with their unit costs e.g. daily charge-out rates (note 13)
Dr B Lawrence (Project Lead) Band 2 – 108 days at £750/day
Mr A Stephens (Technical Lead) Band 4 – 258 days at £511/day
Dr K Marsh (LINK Manager) Band 4 – 300 days at £511/day
Ms S Latham (Project Manager) Band 4 – 43 days at £511/day
Met Office Hadley Centre:
Science Support Role – 312 days at £356/day
Technical Development Role – 86 days at £356/day
Mr G Hobona – 323 days at £355/day
Mr P James – 22 days at £399/day
Technical Asst – 32 days at £103/day
University of East Anglia:
Professor P Jones – 2 days at £344/day
Dr C Harpham – 105 days at £166/day
18. Please give below the address to which payments should be made.
Council for the Central Laboratory of the Research Councils (CCLRC)
Rutherford Appleton Laboratory
Declaration (to be completed by a duly authorised signatory of the proposer’s organisation)
Mr Tony Wells
Head of Sales Contracts
While browsing through the FOIA2009.zip files, I ran across the term “post-normal science” in a Word document called HOT_proposal.doc. Having never heard of it, I ran a google on it and came across the following:
Lead Authors: Silvio Funtowicz and Jerry Ravetz
“In the sorts of issue-driven science relating to the protection of health and the environment, typically facts are uncertain, values in dispute, stakes high, and decisions urgent. The traditional distinction between ‘hard’, objective scientific facts and ‘soft’, subjective value-judgements is now inverted. All too often, we must make hard policy decisions where our only scientific inputs are irremediably soft. The requirement for the ‘sound science’ that is frequently invoked as necessary for rational policy decisions may effectively conceal value-loadings that determine research conclusions and policy recommendations. In these new circumstances, invoking ‘truth’ as the goal of science is a distraction, or even a diversion from real tasks. A more relevant and robust guiding principle is quality, understood as a contextual property of scientific information.
A picture of reality that reduces complex phenomena to their simple, atomic elements can make effective use of a scientific methodology designed for controlled experimentation, abstract theory building and full quantification. But that is not best suited for the tasks of science-related policy today. The traditional ‘normal’ scientific mind-set fosters expectations of regularity, simplicity and certainty in the phenomena and in our interventions. But these can inhibit the growth of our understanding of the new problems and of appropriate methods for their solution.”
To summarize—or reiterate for those whose jaws are still bouncing off the keyboard—the point of “post-normal” science is to forgo standard scientific methodology (data collection, results replication, etc.) and move on to more “holistic” methods of distinguishing fact from fiction. Rather than those pesky steps of the scientific method, post-normal science claims to integrate the natural world with “social systems” to create:
“…the appropriate methodology for integrating with complex natural and social systems.
When a problem is recognised as post-normal, even the routine research exercises take on a new character, for the value-loadings and uncertainties are no longer managed automatically or unselfconsciously. As they may be critical to the quality of the product in the policy context, they are the object of critical scrutiny by researchers themselves as well as by the peers, ordinary and extended. Thus ‘normal science’ itself becomes ‘post-normal’, and is thereby liberated from the fetters of its traditional unreflective, dogmatic style.
The facts that are taught from the textbooks used in training institutions are still necessary, but they are no longer sufficient. Contrary to the impression that the textbooks convey, in practice most problems have more than one plausible answer, and many have no well-defined scientific answer at all.”
And what to replace all those facts with? “Quality,” a term for which I can find no definition that fits this insane construct. The closest I came was the following, from http://www.ijtr.org/Vol%201%20No1/4.%20Pereira_Funtowicz_IJTR_Article_Vol1_no1.pdf :
“As stated earlier, transdisciplinary practise arose as a response to the increasing complexity of scientific knowledge production, and the need to re-establish an active dialogue among a plurality of disciplines and forms of knowledge (Nicolescu 1999). This requirement now extends beyond the inter-operability of methods and techniques coming from different scientific disciplines; it is in fact a quest for quality, not (just) excellence in scientific terms, or (just) reliable knowledge but robustness also in societal terms (Gibbons 1999). The aim of knowledge quality assurance by extended peer review is precisely to open processes and products of policy relevant science to those who can legitimately verify its relevance, fitness for purpose and applicability in societal contexts, contributing with “extended insights and knowledge”.
Transdisciplinary practice and extended peer review face common challenges such as, for example, resistances and closure of institutional or established practice in research and policy, different conceptual and operational framings, knowledge representations and mediation (Guimarães Pereira & Funtowicz 2005). Both require processes of knowledge representation and mediation as the means to actually reconcile different types of knowledge, enhance the quality of policy processes.
TIDDD like tools are interfaces of mediation between policy spheres and other sectors of the society. This mediation is done with the help of experts, but what comes out of the GOUVERNe process is that a new class of expert is emerging, experts in creating contexts for co-production of knowledge, experts in mediation of different types of knowledge, perspectives and values, and eventually experts in making scattered nonorganised pieces of relevant knowledge intelligible to the organised and sometimes poorly flexible institutions: in a sense transdisciplinary experts.
Trans-disciplinarity practice and extended quality assurance processes are about conviviality of different knowledges. It is hoped that tools like TIDDD can help to create the spaces where co-production and integrations take place. The GOUVERNe TIDDD are in fact a transdisciplinary platforms.
Finally, transdisciplinary research entails more than “just” acknowledgement of different perspectives, it requires “language” harmonisation and social, cultural and political contextualisation. Transdisciplinary work requires more than “just” articulation of disciplinary work; it requires institutions, cultures, histories to be reflected in the methodological approaches adopted to address a specific problematique, since contextual uniquenesses do show on the ways people interpret events and respond to those and also on the relationships that can be established with the research community.
The work on TIDDD was financed by the European Commission under the GOUVERNe project (EC project # EVK1-1999-00032).
Based upon the search results, this “post-normal” scheme was created specifically for the environmental governance movement. Open your favorite search engine, key “post-normal science” +define quality into the box and browse the results.
And the Dr. Frankenstein behind this monster is none other than the EU. What have you old worlders done?
Apparently, someone at Hadley is rather enamored of this approach. From the HOT_proposal file:
“Climate change scientists are unable to define what would be an acceptable level and time-frame for global concentrations of greenhouse gases to be stabilised. This is because the evaluation of climate change risks is essentially a political issue. Moreover, scientific uncertainties make it very difficult to assess the likelihood of possible climate change events and thus to quantify the risks of climate change. In short, the climate change issue is characterised as an unstructured problem where both the values at stake as well as the science is uncertain and subject of debate.
This type of post-normal science problem requires a methodological framework within which scientists, policy makers and other stakeholders can enter into a dialogue to assess what level of ‘danger’ (in terms of possible impacts) could be attached to different levels of climate change, what could be the implications of false policy responses (policies being either too loose or too stringent), and hence, what long-term concentration levels (or alternative policy indicators) may be considered acceptable and non acceptable, and on what grounds (criteria/values).”
The properties/details screen for HOT_proposal.doc lists “ineke” as the author, with 11/28/02 @ 11:54 am as the document creation date.
The point of the proposal?
The purpose of the HOT project is to help better articulate and operationalise the ultimate objective as stated in Article 2 of the Climate Change Convention in specific terms on the basis of a science based policy dialogue. Issues to be addressed include the impacts upon stakeholders of various levels of stabilization of greenhouse gas concentrations; costs and opportunities for mitigation/adaptation in different regions given national circumstances, the implications of climate change and mitigation/adaptation for sustainable development; and approaches to decision making for article 2 of the UNFCCC.
The project aims to:
• link the debate on medium-term (post 2012) climate policy targets to long-term perspectives on effective and fair climate change impact control and sustainable development;
• facilitate a scientifically well-informed dialogue amongst climate change policy stakeholders about the options for defining what would constitute dangerous interference with the climate system, as covered by Article 2 of the FCCC;
• improve insights in differences of perspectives and common ground for building policy action; and
• provide insights into options for fair and effective post-Kyoto global climate change regimes for mitigation, impacts and adaptation.
The objectives of this Phase 1 proposal are:
• To identify the possible participants in such a dialogue and to secure their commitment to the project;
• To come to a common problem definition, dialogue agenda and methodology that will allow for effective and fair participation of all participants in the dialogue on Article 2.
• To prepare a detailed project proposal for the dialogue phase, and
• To generate support amongst the policy and funding community for such a dialogue.
And who will be involved in this supposed dialogue? From the word document:
Asia      2 (+4)  2 (+4)  2 (+4)  2 (+4)
Africa    2 (+4)  2 (+4)  2 (+4)  2 (+4)
Lat Am    2 (+4)  2 (+4)  2 (+4)  2 (+4)
OECD/EIT  4 (+8)  4 (+4)  2 (+4)  2 (+4)
N.B. The numbers outside parentheses indicate the participants selected for the international dialogue. The numbers inside the parentheses indicate the participants that also participate in the regional dialogues.
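For those keeping score, the headline figure below is just the sum of the numbers outside the parentheses in the table above; a quick sanity check (the row values are copied straight from the document):

```python
# Participant counts from the table above: numbers outside parentheses only
# (rows: Asia, Africa, Lat Am, OECD/EIT; four dialogue columns each).
rows = {
    "Asia":     [2, 2, 2, 2],
    "Africa":   [2, 2, 2, 2],
    "Lat Am":   [2, 2, 2, 2],
    "OECD/EIT": [4, 4, 2, 2],
}

# Total participants selected for the international dialogue.
total = sum(sum(counts) for counts in rows.values())
print(total)  # 36
```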
A whopping 36 people to supposedly represent all of us who aren’t in the least interested in bowing to “post-normal” science.
The proposal author cites Funtowicz thrice, so deniability is not an option.
I still haven’t been able to determine whether this proposal ever went anywhere. I wasn’t able to find it in the spreadsheets included in the .zip file, but that’s hardly conclusive.
One may reasonably conclude, however, that CRU isn’t too terribly concerned with that “old” science, judging by their criminally slipshod code, and that their invocation of “post-normal” science gives them the perfect platform from which to launch the hostile takeover of the free world using euphemisms that would make Orwell blush. Stakeholders? Please.
If anyone knows whether this proposal was accepted I’d love to know where, when and by whom.