Musings on Climategate

CRU code: You gotta be kidding!

Posted in AGW Political, AGW Rhetorical by emelks on December 9, 2009

The module below, pattscale.f90, lives in the documents\cru-code\linux\mod directory. Apparently it allows the user to manually specify the “forcing” and “climate sensitivity” that get run through the computations.

Forgive me if I’m slow, but isn’t that what these “models” are supposed to do?

Any insight would be greatly appreciated.

! pattscale.f90
! module procedure written by Tim Mitchell
! includes subroutines necessary for pattern scaling to equilibrium

module PattScale

implicit none

contains

!*******************************************************************************

subroutine GetKaySet (KaySet,Forc2co2,FitAlpha,FitBeta,Sens2co2Init,TrendTLen)

real, intent(out) :: Forc2co2,FitAlpha,FitBeta,Sens2co2Init

integer, intent(in) :: KaySet ! may be MissVal or zero or >=1
integer, intent(out) :: TrendTLen

real, parameter :: MissVal = -999.0

integer :: AllocStat,ReadStatus
integer :: QKaySet

!***************************************

if (KaySet.EQ. 0) then ! allow selection of QKaySet if required
print*, " > These are the available sets of constants: "
print*, " > 1 : 3.47, 1.3388, -96.613, 1.9, 100"
do
read (*,*,iostat=ReadStatus), QKaySet
if (ReadStatus.LE.0.AND.QKaySet.GE.1.AND.QKaySet.LE.1) exit
end do
else
QKaySet = KaySet
end if

Okay, so the subroutine prints the available set of five constants on screen and asks the user to pick a set by number (only set 1 exists). I’ve not yet found where KaySet itself gets set, but it seems a bit odd that the program lets the user choose these values at all.

if (QKaySet.EQ. 1) then ! allow designation of constants if possible
Forc2co2=3.47 ; FitAlpha=1.3388 ; FitBeta=-96.613 ; Sens2co2Init=1.9 ; TrendTLen=100
else
QKaySet = MissVal
end if

if (KaySet.EQ.MissVal) then ! designate constants individually
print*, " > Enter the radiative forcing for a doubling of CO2: "
do
read (*,*,iostat=ReadStatus), Forc2co2
if (ReadStatus.LE.0.AND.Forc2co2.GT.0) exit
end do

WHAT??? They’re having the user enter the value for radiative forcing? Isn’t that supposed to be a variable determined by the model???

print*, " > Enter the alpha and beta parameters for dS/dT=alpha*e(beta*dT/dt): "
do
read (*,*,iostat=ReadStatus), FitAlpha, FitBeta
if (ReadStatus.LE.0) exit
end do

print*, " > Enter the initial climate sensitivity for a doubling of CO2: "

Once again, WHAT?????? How is this determined? Since it’s a user-specified value, where does the user get the appropriate value to enter?

do
read (*,*,iostat=ReadStatus), Sens2co2Init
if (ReadStatus.LE.0.AND.Sens2co2Init.GT.0) exit
end do
print*, " > Enter the period length over which to calc dT/dt: "
do
read (*,*,iostat=ReadStatus), TrendTLen
if (ReadStatus.LE.0.AND.TrendTLen.GT.0) exit
end do
end if

end subroutine GetKaySet

!*******************************************************************************
end module PattScale
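
To make the point concrete, here is a minimal, hypothetical driver of my own (it is not in the CRU archive) showing how GetKaySet might be invoked. Pass KaySet = 0 and you get the menu above; pass the missing-value code (-999) and the routine simply asks you to type in the forcing and the sensitivity yourself. The alpha/beta prompt, if I’m reading it right, corresponds to the fitted relation dS/dT = alpha * exp(beta * dT/dt), i.e. the rate at which the sensitivity S changes with temperature is itself a function of the warming rate.

! driver_pattscale.f90 -- hypothetical example, not part of the CRU archive.
! A minimal sketch of how the GetKaySet subroutine above might be driven.
program TestPattScale
use PattScale
implicit none

real    :: Forc2co2, FitAlpha, FitBeta, Sens2co2Init
integer :: TrendTLen

! KaySet = 0 brings up the on-screen menu of constant sets;
! KaySet = -999 would instead prompt for every constant by hand.
call GetKaySet (0, Forc2co2, FitAlpha, FitBeta, Sens2co2Init, TrendTLen)

print*, 'Forcing for 2xCO2 (W/m2):      ', Forc2co2
print*, 'Fit parameters alpha, beta:    ', FitAlpha, FitBeta
print*, 'Initial sensitivity for 2xCO2: ', Sens2co2Init
print*, 'Trend period length (years):   ', TrendTLen

end program TestPattScale

Note that the only sanity checks on what gets typed in are that the forcing, the sensitivity and the trend length be positive; alpha and beta are accepted exactly as entered.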

And then I see, in documents\yamal\sf2.txt:

Dear Keith.

Stepan Shiyatov said me you need only data covered last 2 millenium. Now I send data of 35 samples covered earlier millenium. These are all samples concerning this period and they are checked (there are about 130 more samples from 0 to 1800 AD not checked at this time). I hope your desire to see low growth about 350 BC will be more or less satisfied. However for some reason there are no good correlation between number of samples and growth rate. For instance, about 700 BC provided by only one sample with very high growth during just this period. I don’t know why, may be number of trees depends on burial conditions as well.

I have to note that 364 BC (not 360 BC as I wrote before) on sample No. 60 slightly looks like false. On sample 453 it is normal ring, on other sample it is very small. Therefore I can’t still say something definitely.

Best wishes,
Rashit Hantemirov

“I hope your desire to see low growth about 350 BC will be more or less satisfied”???? I’m guessing that low growth = low temperature in CRUland, and if that’s the case, is the emailer hoping the data will prove to be to Keith’s liking? And what’s the deal with the number of samples not lining up with the desired growth rate? It sounds a lot like cherry-picking to me.

I understand those who are struggling to find any way to justify what is contained in the .zip file. Some say that the above code is meaningless unless proven to have been used in a publicly distributed document. Others say the email is irrelevant. But taken together, it’s irrefutable that this group has been, and still is, attempting to defraud the people of the world using garbage data.


Michael Schlesinger emails back

Posted in AGW Political, AGW Rhetorical by emelks on December 8, 2009

After reading the obnoxiously threatening email Mr. Schlesinger sent to a reporter, I emailed him my thoughts on the subject. Amazingly, he emailed back. The response is below.

While I was pleasantly surprised that his tone was far more reasoned than the email he composed to the reporter, I wasn’t shocked that he assumes I’m too ignorant to know that physics is hardly hands-down support for AGW.

emelks:

Thank you for your e-mail message to me below.

Science has known for over 100 years that our burning fossil fuels – Nature’s gift to humanity, without which we would have been in a perpetual dark age – would cause global warming/climate change.
The physics underpinning this is irrefutable.

The physical evidence of human-caused global warming/climate change is all around us, and is undeniable.

We can either choose to:

(1) Ignore this physics and physical evidence of global warming/climate change and, thereby, risk the irreversible outcome therefrom;

or

(2) Face the problem squarely and begin the very difficult task of transitioning ourselves this century from the fossil-fuel age to the post fossil-fuel age.

In my public lectures and debates, I advise the world to choose Option 2.

I sense that in about 20 years’ time, if I am still alive, people will say to me: “Why didn’t you tell us about human-caused global warming/climate change?” I will reply, “But I did, for almost 60 years.” “Yes, they will say, but why did you not make us believe it?” And I will respond, “Because you chose to not so do”.

I can no longer aid a journalist who aids those who recommend Option 1, thereby putting the world at great risk. And so I have now ceased to do so – the “Great Cutoff”.

Elena, you are probably much younger than I, hence this is your planet.

I hope that you will make informed decisions about her well-being and yours.

Prof. Schlesinger

P.S. What does GIGO-laden code mean?

On Dec 7, 2009, at 6:38 PM, emelks wrote:

reveals you as a thug, not a scientist. Enjoy what little limelight remains, we the people aren’t going to submit to your GIGO-laden code.

Sincerely,
emelks

My response to him:

Mr. Schlesinger–

Thank you for a reasoned response.

I’ve spent the past week plowing through the code leaked from UEA and am appalled at the outright fraud committed in the code. It’s obscene.

My best friend is a physicist and he refutes the notion that physics absolutely proves AGW. He posits that solar fluctuations are solely responsible for climate variation and that the brouhaha over AGW is nothing more than politics abusing science as an alternative to religion.

Furthermore, as a serious gardener I watch weather carefully and I can say with absolute certainty that the climate in my area has cooled noticeably in the past 4 years. I don’t need Kevin Trenberth to tell me what my own experience has proven. That the junk code I’ve read can’t predict a significant trend in the other direction–even with, or perhaps due to, the hardcoded “fudge factors”–indicates to me that models aren’t worth the electrons used to run them.

I am young, with children, and I refuse to throw away their futures based upon fatally flawed models and politicians’ rhetoric. I want my children to enjoy the prosperity and freedoms with which I grew up and I will not stand idly by and watch it all purposefully destroyed to advance anyone’s political agenda.

Thanks again, and I hope you plan to revisit your assumptions based upon the information revealed from UEA.

Sincerely,
emelks

PS–GIGO means garbage in garbage out.

Why did Australia cross Cap-n-Tax Road? Could it be this?

Posted in AGW Political, AGW Rhetorical by emelks on December 7, 2009

So why did Australia unexpectedly dump their “cap and trade” scheme? Could it be related to the following from the HARRY_READ_ME.txt file, where they talk about throwing two-thirds of the Australian data out, then bringing it back in because dumping it from HADCRUT3 would reveal the lies processed into HADCRUT2? The money quote is at the end:

I’ve tried the simple method (as used in Tim O’s geodist.pro, and the more complex and accurate method found elsewhere (wiki and other places). Neither give me results that are anything near reality.

Does anyone else get a laugh out of these supposedly brilliant “scientists” going to WIKIPEDIA to get code?

Holy cow!

I’ve tried to snip out the worst of the code-mumbo-jumbo for those not inclined to read code, but left enough that no one may claim I’m taking anything “out of context.”

Decided to process temperature all the way. Ran IDL:

IDL> quick_interp_tdm2,1901,2006,'tmpglo/tmpgrid.',1200,gs=0.5,dumpglo='dumpglo',pts_prefix='tmp0km0705101334txt/tmp.'

then glo2abs, then mergegrids, to produce monthly output grids. It apparently worked:

[snip]

As a reminder, these output grids are based on the tmp.0705101334.dtb database, with no merging of neighbourly stations and a limit of 3 standard deviations on anomalies.

Decided to (re-) process precip all the way, in the hope that I was in the zone or something. Started with IDL:

“Hoping” to be close to right? Sounds like junk code to me. But I digress.

IDL> quick_interp_tdm2,1901,2006,'preglo/pregrid.',450,gs=0.5,dumpglo='dumpglo',pts_prefix='pre0km0612181221txt/pre.'

Then glo2abs, then mergegrids.. all went fine, apparently.

31. And so.. to DTR! First time for generation I think.

Wrote ‘makedtr.for’ to tackle the thorny problem of the tmin and tmax databases not being kept in step. Sounds familiar, if worrying. am I the first person to attempt to get the CRU databases in working order?!! The program pulls no punches. I had already found that tmx.0702091313.dtb had seven more stations than tmn.0702091313.dtb, but that hadn’t prepared me for the grisly truth:

[snip]

Yes, the difference is a lot more than seven! And the program helpfully dumps a listing of the surplus stations to the log file. Not a pretty sight. Unfortunately, it hadn’t worked either. It turns out that there are 3518 stations in each database with a WMO Code of ‘ 0’. So, as the makedtr program indexes on the WMO Code.. you get the picture. *cries*

Rewrote as makedtr2, which uses the first 20 characters of the header to match:

[snip]

The big jump in the number of ‘surplus’ stations is because we are no longer automatically matching stations with WMO=0.

Here’s what happened to the tmin and tmax databases, and the new dtr database:

Old tmin: tmn.0702091139.dtb Total Records Read: 14309
New tmin: tmn.0705162028.dtb Total Records Read: 14106
Del tmin: tmn.0702091139.dtb.del Total Records Read: 203

Old tmax: tmx.0702091313.dtb Total Records Read: 14315
New tmax: tmx.0705162028.dtb Total Records Read: 14106
Del tmax: tmx.0702091313.dtb.del Total Records Read: 209

New dtr: dtr.0705162028.dtb Total Records Read: 14107

*sigh* – one record out! Also three header problems:

BLANKS (expected at 8,14,21,26,47,61,66,71,78)
position missed
8 1
14 1
21 0
26 0
47 1
61 0
66 0
71 0
78 0

Why?!! Well the sad answer is.. because we’ve got a date wrong. All three ‘header’ problems relate to this line:

6190 94 95 98 100 101 101 102 103 102 97 94 94

..and as we know, this is not a conventional header. Oh bum. But, but.. how? I know we do muck around with the header and start/end years, but still..

Wrote filtertmm.for, which simply steps through one database (usually tmin) and looks for a ‘perfect’ match in another database (usually tmax). ‘Perfect’ here means a match of WMO Code, Lat, Lon, Start-Year and End-Year. If a match is found, both stations are copied to new databases:

[snip]

Old tmin database: tmn.0702091139.dtb had 14309 stations
New tmin database: tmn.0705182204.dtb has 13016 stations
Old tmax database: tmx.0702091313.dtb had 14315 stations
New tmax database: tmx.0705182204.dtb has 13016 stations
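
For those who don’t read Fortran, the ‘perfect’ test described above amounts to comparing five header fields and nothing else. Here is a minimal sketch of my own (an illustration of the rule as Harry states it, not the actual filtertmm.for; it assumes the headers have already been parsed, with lat and lon held in hundredths of a degree as they appear in the database listings below):

! Hypothetical illustration of the matching rule -- not the actual filtertmm.for.
! Assumes each station header has already been parsed into these fields.
logical function PerfectMatch (WMOa, LatA, LonA, Yr0a, Yr1a, &
                               WMOb, LatB, LonB, Yr0b, Yr1b)
implicit none
integer, intent(in) :: WMOa, WMOb               ! WMO codes (0 = missing)
integer, intent(in) :: LatA, LonA, LatB, LonB   ! hundredths of a degree
integer, intent(in) :: Yr0a, Yr1a, Yr0b, Yr1b   ! start and end years

PerfectMatch = (WMOa.EQ.WMOb) .AND. (LatA.EQ.LatB) .AND. (LonA.EQ.LonB) &
               .AND. (Yr0a.EQ.Yr0b) .AND. (Yr1a.EQ.Yr1b)

end function PerfectMatch

Note that the station name and altitude play no part in the test, which is how a TARRALEAH VILLAGE and a TARRALEAH CHALET record can be declared a ‘perfect’ pair; that is exactly the kind of mismatch Harry stumbles over a few paragraphs further down.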

I am going to *assume* that worked! So now.. to incorporate the Australian monthly data packs. Ow. Most future-proof strategy is probably to write a converter that takes one or more of the packs and creates CRU-format databases of them. Edit: nope, thought some more and the *best* strategy is a program that takes *pairs* of Aus packs and updates the actual databases. Bearing in mind that these are trusted updates and won’t be used in any other context.

From Dave L – who incorporated the initial Australian dump – for the tmin/tmax bulletins, he used a threshold of 26 days/month or greater for inclusion.

Obtained two files from Dave – an email that explains some of the Australian bulletin data/formatting, and a list of Australian headers matched with their internal codes (the latter being generated by Dave).

Actually.. although I was going to assume that filtertmm had done the synching job OK, a brief look at the Australian stations in the databases showed me otherwise. For instance, I pulled all the headers with ‘AUSTRALIA’ out of the two 0705182204 databases. Now because these were produced by filtertmm, we know that the codes (if present), lats, lons and dates will all match. Any differences will be in altitude and/or name. And so they were:

crua6 diff tmn.0705182204.dtb.oz tmx.0705182204.dtb.oz | wc -l
336

..so roughly 100 don’t match. They are mostly altitude discrepancies, though there are an alarming number of name mismatches too. Examples of both:

74c74
0 -3800 14450 8 AVALON AIRPORT AUSTRALIA 2000 2006 -999 -999.00

16c16
0 -4230 14650 595 TARRALEAH CHALET AUSTRALIA 2000 2006 -999 -999.00

Examples of the second kind (name mismatch) are most concerning as they may well be different stations. Looked for all occurences in all tmin/tmax databases:

crua6 grep 'TARRALEAH' *dtb
tmn.0702091139.dtb: 0 -4230 14650 585 TARRALEAH VILLAGE AUSTRALIA 2000 2006 -999 -999.00
tmn.0702091139.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00
tmn.0705182204.dtb: 0 -4230 14650 585 TARRALEAH VILLAGE AUSTRALIA 2000 2006 -999 -999.00
tmn.0705182204.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00
tmx.0702091313.dtb: 0 -4230 14650 595 TARRALEAH CHALET AUSTRALIA 2000 2006 -999 -999.00
tmx.0702091313.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00
tmx.0705182204.dtb: 0 -4230 14650 595 TARRALEAH CHALET AUSTRALIA 2000 2006 -999 -999.00
tmx.0705182204.dtb:9597000 -4230 14645 595 TARRALEAH CHALET AUSTRALIA 1991 2000 -999 -999.00

This takes a little sorting out. Well first, recognise that we are dealing with four files: tmin and tmax, early and late (before and after filtertmm.for). We see there are two TARRALEAH entries in each of the four files. We see that ‘TARRALEAH VILLAGE’ only appears in the tmin file. We see, most importantly perhaps, that they are temporally contiguous – that is, each pair could join with minimal overlap, as one is 1991-2000 and the other 2000-2006. Also, we note that the ‘early’ one of each pair has a slightly different longitude and altitude (the former being the thing that distinguished the stations in filtertmm.for).

Finally, this, from the tmax.2005120120051231.txt bulletin:

95018, 051201051231, -42.30, 146.45, 18.0, 00, 31, 31, 585, TARRALEAH VILLAGE

So we can resolve this case – a single station called TARRALEAH VILLAGE, running from 1991 to 2006.

But what about the others?! There are close to 1000 incoming stations in the bulletins, must every one be identified in this way?!! Oh God. There’s nothing for it – I’ll have to write a prog to find matches for the incoming Australian bulletin stations in the main databases. I’ll have to use the databases from before the filtertmm application, so *0705182204.dtb. And it will only need the Australian headers, so I used grep to create *0705182204.dtb.auhead files. The other input is the list of stations taken from the monthly bulletins. Now these have a different number of stations each month, so the prog will build an array of all possible stations based on the files we have. Oh boy. And the program shall be called, ‘auminmaxmatch.for’.

Assembled some information:

crua6 wc -l *auhead
1518 glseries_tmn_final_merged.auhead
1518 tmn.0611301516.dat.auhead
1518 tmn.0612081255.dat.auhead
1518 tmn.0702091139.dtb.auhead
1518 tmn.0705152339.dtb.auhead
1426 tmn.0705182204.dtb.auhead

(the ‘auhead’ files were created with )

Actually, stopped work on that. Trying to match over 800 ‘bulletin’ stations against over 3,000 database stations *in two unsynchronised files* was just hurting my brain. The files have to be properly synchronised first, with a more lenient and interactive version of filtertmm. Or… could I use mergedb?! Pretend to merge tmin into tmax and see what pairings it managed? No roll through obviously. Well it’s worth a play.

..unfortunately, not. Because when I tried, I got a lot of odd errors followed by a crash. The reason, I eventually deduced, was that I didn’t build mergedb with the idea that WMO codes might be zero (many of the australian stations have wmo=0). This means that primary matching on WMO code is impossible. This just gets worse and worse: now it looks as though I’ll have to find WMO Codes (or pseudo-codes) for the *3521* stations in the tmin file that don’t have one!!!

OK.. let’s break the problem down. Firstly, a lot of stations are going to need WMO codes, if available. It shouldn’t be too hard to find any matches with the existing WMO coded stations in the other databases (precip, temperature). Secondly, we need to exclude stations that aren’t synchronised between the two databases (tmin/tmax). So can mergedb be modified to treat WMO codes of 0 as ‘missing’? Had a look, and it does check that the code isn’t -999 OR 0.. but not when preallocating flags in subroutine ‘countscnd’. Fixed that and tried running it again.. exactly the same result (crash). I can’t see anything odd about the station it crashes on:

0 -2810 11790 407 MOUNT MAGNET AERO AUSTRALIA 2000 2006 -999 -999.00
6190-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999-9999
2000 339 344 280 252 214 202 189 196 262 291 316 377
2001 371 311 310 300 235 212 201 217 249 262 314 333
2002-9999-9999 339 297 258 209 205 212 246 299 341 358
2003 365 367 336 296 249 195 193 200 238 287 325 368
2004 395 374 321 284 219 214 173 188 239 309 305 370
2005 389 396 358 315 251 182 189 201 233 267 332 341
2006 366 331 314 246 240-9999-9999-9999-9999-9999-9999-9999

.. it’s very similar to preceding (and following) stations, and the station before has even less real data (the one before that has none at all and is auto-deleted). The nature of the crash is ‘forrtl: error (65): floating invalid’ – so a type mismatch possibly. The station has a match in the tmin database (tmn.0702091139.dtb) but the longitude is different:

tmn.0702091139.dtb:
0 -2810 11780 407 MOUNT MAGNET AERO AUSTRALIA 2000 2006 -999 -999.00
tmx.0702091313.dtb:
0 -2810 11790 407 MOUNT MAGNET AERO AUSTRALIA 2000 2006 -999 -999.00

It also appears in the tmin/tmax bulletins, eg:
7600, 070401070430, -28.12, 117.84, 16.0, 00, 30, 30, 407, MOUNT MAGNET AERO

Note that the altitude matches (as distinct from the station below).

Naturally, there is a further ‘MOUNT MAGNET’ station, but it’s probably distinct:

tmn.0702091139.dtb:
9442800 -2807 11785 427 MOUNT MAGNET (MOUNT AUSTRALIA 1956 1992 -999 -999.00
tmx.0702091313.dtb:
9442800 -2807 11785 427 MOUNT MAGNET (MOUNT AUSTRALIA 1957 1992 -999 -999.00

I am at a bit of a loss. It will take a very long time to resolve each of these ‘rogue’ stations. Time I do not have. The only pragmatic thing to do is to dump any stations that are too recent to have normals. They will not, after all, be contributing to the output. So I knocked out ‘goodnorm.for’, which simply uses the presence of a valid normals line to sort. The results were pretty scary:

[snip]

FINISHED.

Stations retained: 5026
Stations removed: 9283

crua6 ./goodnorm

GOODNORM: Extract stations with non-missing normals

Please enter the input database name: tmx.0702091313.dtb
The output database will be called: tmx.0705281724.dtb

(removed stations will be placed in: tmx.0705281724.del)

FINISHED.

Stations retained: 4997
Stations removed: 9318

Essentially, two thirds of the stations have no normals! Of course, this still leaves us with a lot more stations than we had for tmean (goodnorm reported 3316 saved, 1749 deleted) though still far behind precipitation (goodnorm reported 7910 saved, 8027 deleted).
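
Harry doesn’t show goodnorm.for itself, but judging from the station records quoted elsewhere in the file (the ‘6190’ line carries the 1961-1990 normals, and -9999 marks a missing value), the test presumably boils down to something like the sketch below. This is my own guess at the logic, not CRU code, and I’ve assumed ‘valid’ means all twelve monthly normals are present.

! Hypothetical sketch of the goodnorm test -- not the actual goodnorm.for.
! NormLine holds the twelve monthly values read from a station's '6190' line.
logical function HasValidNormals (NormLine)
implicit none
integer, intent(in) :: NormLine(12)   ! monthly normals; -9999 = missing
integer :: im

HasValidNormals = .TRUE.
do im = 1, 12
  if (NormLine(im) .EQ. -9999) HasValidNormals = .FALSE.
end do

end function HasValidNormals

Applied to the MOUNT MAGNET AERO record shown a little earlier, whose 6190 line is all -9999, a test like this fails and the station lands in the .del file, which squares with the two-thirds attrition Harry reports.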

I suspect the high percentage lost reflects the influx of modern Australian data. Indeed, nearly 3,000 of the 3,500-odd stations with missing WMO codes were excluded by this operation. This means that, for tmn.0702091139.dtb, 1240 Australian stations were lost, leaving only 278.

This is just silly. I can’t dump these stations, they are needed to potentially match with the bulletin stations. I am now going to try the following:

1. Attempt to pair bulletin stations with existing stations in the tmin database. Mark pairings in the database headers and in a new ‘Australian Mappings’ file. Program auminmatch.for.

2. Run an enhanced filtertmm to synchronise the tmin and tmax databases, but prioritising the ‘paired’ stations from step 1 (so they are not lost). Mark the same pairings in the tmax headers too, and update the ‘Australian Mappings’ file.

3. Add the bulletins to the databases.

OK.. step 1. Modified auminmaxmatch.for to produce auminmatch.for. Hit a semi-philosophical problem: what to do with a positive match between a bulletin station and a zero-wmo database station? The station must have a real WMO code or it’ll be rather hard to describe the match!

Got a list of around 12,000 wmo codes and stations from Dave L; unfortunately there was a problem with its formatting that I just couldn’t resolve.

So.. current thinking is that, if I find a pairing between a bulletin station and a zero-coded Australian station in the CRU database, I’ll give the CRU database station the Australian local (bulletin) code twice: once at the end of the header, and once as the WMO code *multiplied by -1* to avoid implying that it’s legitimate. Then if a ‘proper’ code is found or allocated later, the mapping to the bulletin code will still be there at the end of the header. Of course, an initial check will ensure that a match can’t be found, within the CRU database, between the zero-coded station and a properly-coded one.
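
Translated into code, the scheme Harry describes might look something like the sketch below. This is my paraphrase, not anything from the archive; I’m using the TARRALEAH VILLAGE bulletin code, 95018, from the earlier excerpt as the example.

! Hypothetical paraphrase of the negative-code scheme described above -- not CRU code.
subroutine TagWithBulletinCode (WmoCode, Header, BulletinCode)
implicit none
integer, intent(inout)          :: WmoCode       ! 0 = missing in the CRU databases
character(len=*), intent(inout) :: Header        ! full station header line
integer, intent(in)             :: BulletinCode  ! Australian local code, e.g. 95018

character(len=8) :: tail

if (WmoCode .EQ. 0) then
  WmoCode = -1 * BulletinCode        ! stored negative, e.g. -95018, so it cannot pass for a real WMO code
  write (tail, '(i8)') BulletinCode  ! the same code, positive, goes on the end of the header...
  Header = trim(Header) // tail      ! ...so the mapping survives if a proper WMO code is assigned later
end if

end subroutine TagWithBulletinCode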

Debated header formats with David. I think we’re going to go with (i8,a8) at the end of the header, though really it’s (2x,i6,a8) as I remember the Anders code being i2 and the real start year being i4 (both from the tmean database). This will mean post-processing existing databases of course, but that’s not a priority.

A brief (hopefully) diversion to get station counts sorted. David needs them so might as well sort the procedure. In the upside-down world of Mark and Tim, the numbers of stations contributing to each cell during the gridding operation are calculated not in the IDL gridding program – oh, no! – but in anomdtb! Yes, the program which reads station data and writes station data has a second, almost-entirely unrelated function of assessing gridcell contributions. So, to begin with it runs in the usual way:

crua6 ./anomdtb

> ***** AnomDTB: converts .dtb to anom .txt for gridding *****

> Enter the suffix of the variable required:
.pre
> Will calculate percentage anomalies.
> Select the .cts or .dtb file to load:
pre.0612181221.dtb

> Specify the start,end of the normals period:
1961,1990
> Specify the missing percentage permitted:
25
> Data required for a normal: 23
> Specify the no. of stdevs at which to reject data:
4

But then, we choose a different output, and it all shifts focus and has to ask all the IDL questions!!

> Select outputs (1=.cts,2=.ann,3=.txt,4=.stn):
4
> Check for duplicate stns after anomalising? (0=no,>0=km range)
0
> Select the .stn file to save:
pre.stn
> Enter the correlation decay distance:
450
> Submit a grim that contains the appropriate grid.
> Enter the grim filepath:
clim.6190.lan.pre

> Grid dimensions and domain size: 720 360 67420
> Select the first,last years AD to save:
1901,2006
> Operating…

> NORMALS MEAN percent STDEV percent
> .dtb 7315040 73.8
> .cts 299359 3.0 7613600 76.8
> PROCESS DECISION percent %of-chk
> no lat/lon 17911 0.2 0.2
> no normal 2355275 23.8 23.8
> out-of-range 13253 0.1 0.2
> accepted 7521013 75.9
> Calculating station coverages…

And then.. it unhelpfully crashes:

> ##### WithinRange: Alloc: DataB #####
forrtl: severe (174): SIGSEGV, segmentation fault occurred

Ho hum. I did try this last year which is why I’m not tearing my hair out. The plan is to use the outputs from the regular anomdtb runs – ie, the monthly files of valid stations. After all we need to know the station counts on a per month basis. We can use the lat and lon, along with the correlation decay distance.. shouldn’t be too awful. Just even more programming and work. So before I commit to that, a quick look at the IDL gridding prog to see if it can dump the figures instead: after all, this is where the actual ‘station count’ information is assembled and used!!

..well that was, erhhh.. ‘interesting’. The IDL gridding program calculates whether or not a station contributes to a cell, using.. graphics. Yes, it plots the station sphere of influence then checks for the colour white in the output. So there is no guarantee that the station number files, which are produced *independently* by anomdtb, will reflect what actually happened!!

Well I’ve just spent 24 hours trying to get Great Circle Distance calculations working in Fortran, with precisely no success. I’ve tried the simple method (as used in Tim O’s geodist.pro, and the more complex and accurate method found elsewhere (wiki and other places). Neither give me results that are anything near reality. FFS.
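
For what it’s worth, the calculation Harry spent 24 hours fighting is the standard great-circle distance. A haversine version, which is the textbook approach, runs to about a dozen lines of Fortran; the sketch below is my own illustration of that formula, not code from the archive, so it says nothing about why Harry’s attempts failed.

! Hypothetical sketch: haversine great-circle distance, degrees in, km out.
! My illustration of the textbook formula -- not code from the CRU archive.
real function GCDist (Lat1, Lon1, Lat2, Lon2)
implicit none
real, intent(in) :: Lat1, Lon1, Lat2, Lon2   ! decimal degrees
real, parameter  :: REarth  = 6371.0         ! mean Earth radius, km
real, parameter  :: Deg2Rad = 3.14159265 / 180.0
real :: Phi1, Phi2, dPhi, dLam, A

Phi1 = Lat1 * Deg2Rad
Phi2 = Lat2 * Deg2Rad
dPhi = (Lat2 - Lat1) * Deg2Rad
dLam = (Lon2 - Lon1) * Deg2Rad

A = sin(dPhi/2.0)**2 + cos(Phi1)*cos(Phi2)*sin(dLam/2.0)**2
GCDist = 2.0 * REarth * asin( sqrt( min(1.0, A) ) )   ! clamp guards against rounding error

end function GCDist

Presumably a routine like this is also all that the per-cell station counts need: count the stations whose distance from the cell centre falls within the correlation decay distance.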

Communication as Government Coercion: Futerra and the UK

Posted in AGW Political, AGW Rhetorical by emelks on November 27, 2009

I’ve just completed 64 pages of the intentional word-fuddling called “UK Communications Strategy on Climate Change.” The file I downloaded from Futerra’s website has a last-edited date of February 15, 2005.

I wound up in this document after reading a reference to it in “Rules of the Game,” a .pdf document included in the leaked .zip file from CRU. It is also on the Futerra website.

The authors state on page 8:

“We must stress that the Climate Change Communications Working
Group commissioned FUTERRA to produce a set of recommendations based upon
rigorous evidence and experience; they are not beholden to accept all of our
recommendations.”

Given the hideousness of this document, one can only hope that all the “recommendations” have been rejected. I’ve not yet had time to determine which have been implemented in the UK, nor whether any of it has seeped into the US. Many of the “recommendations” are, however, quite familiar from general experience over the past few years, and one may be forgiven for assuming that both governments decided to implement Futerra’s plan to force the falsehoods of “climate change” upon their respective populations through the most Orwellian means imaginable.

What does Futerra mean to accomplish through this effort? On pages 8-9:

1 The primary benefit of the attitude change campaign will be in generating
a sense of urgency on climate change, and ‘hooks’ for gaining
acceptance of policy changes.

2 Affecting attitudes on climate change will also help minimise the potential
problems or negative reactions to the social or economic elements of policy
development (e.g. energy price rises).

3 By generating excitement around our potential to act on climate change, existing
behaviour change programmes (of Carbon Trust and Energy Saving Trust) should
find a more receptive audience for their messages.

In other words, it’s a tool to manipulate a population that would otherwise be reluctant to conform to new policy with minimal participation or resistance. That sounds nice, doesn’t it?

4 By providing funds and guidance to local/regional communicators, their
impact will be improved and a host of new channels and audiences for messages
on climate change will be created.

5 This process has already produced results. The Rules of the Game document
is already being used within UK government, and indeed internationally, to
improve the impacts of climate change communications.

Futerra recommends that the government pay selected people to spout the party line on government-funded media outlets.

And they claim that the “Rules” document, itself an Orwellian nightmare, is already in use internationally.

While the entire document contains enough material to produce a doctoral thesis on modern propaganda, I am particularly interested in the media and “voice” sections.

Pg 50: Voice

Do we need a single person to be the ‘voice’ for climate
change in the UK? Or do we need the current voices to be more co-ordinated?

We have considered this issue in depth over the period of the strategy
development. Our commitment to ‘many voices’ for climate change
and the use of social networking must be set against the Rule calling for
a trusted, credible and recognized voice for climate change. In essence, the
recommendations of the Toolkit and Fund section seek to create a host of trusted,
credible and locally recognised voices on climate change. It is the potential
need for a national voice that generates questions.

Our conclusion is that a degree of sophistication is needed to approach
this issue. The challenge of a ‘voice’ is actually a series of
activities that must take place.

Firstly a rigorous internal audit of government and government agency
spokespeople for climate change must be undertaken. A central ‘list’
of who is trained and qualified to speak about what must be compiled. No one
from outside this list should be encouraged to take a public platform without
a clear understanding of their skills and after training to ensure they are
competent in the language and message.

This may seem draconian, but in any large multinational or other media-sensitive
organisation, only legitimate and trained individuals would be used to engage
in crucial debate or profile raising.

Once this list has been compiled a gap analysis of desired skills, expertise
and important issues should be undertaken. If there are any areas (such as
climate change and health, or climate change and security) identified as currently
lacking a spokesperson, then suitable and senior ‘voices’ should
be secured for the list from departments or agencies.

Then the full and comprehensive list of ‘voices’ should be
sent regular updates of key information and have their attention drawn to
potential platforms for their issue.

We believe that this approach will be successful in the short to mid-term.
From our understanding of those high profile individuals that have emerged
in different sectors to ‘lead’ on issue (such as the astronomer
Patrick Moore, David Attenborough, Alan Titchmarsh, David Starkey etc) we
understand they are very difficult to create. We suggest that a ‘watching
brief’ is set for any emerging single voice, from the existing list
or outside it, who can then be given support and profile through the existing
strategy activities.

To sum it up, this “draconian” measure will ensure that no one in any position of governmental authority will be allowed to spread information not to the government’s liking. Additionally, individuals outside the government who dare to speak differently will be tagged with the “outside” label, will not receive the special government treatment, and will face, implicitly or explicitly, all the fears and pressures that come with being branded a government opponent.

Is this what the governing body of a free people does?

Page 53

Media Management

The media are a primary, if not the primary channel for information and opinion
on climate change in the UK. It is critical that a media management plan is
integrated with the other elements of this strategy.

Media coverage of climate change is still relatively niche, apart from
the occasional front-page splash story that emerges in the broadsheets. The
debate must shift in emphasis from the debate about why or if climate change
must be managed through to a more informed debate about how we mitigate and
adapt. Editors also need to be assisted in their understanding of the all-pervasive
importance of climate change across a range of different editorial responsibilities.

“Assisted in their understanding”? Does that translate as, “here’s the line, toe it or else?”

Since when does a government “assist” anyone in their “understanding” of their day-to-day job? In a free country, that is.

Our objectives for the media are to:

• Increase coverage of climate change solutions

• Reduce coverage of climate change detractors

• Encourage more references to climate change in relation to other issues
(health, employment, leisure and the economy)

This final issue is critical. Our overarching vision – the branded statement
– provides a strong framework for linking climate change to the things
that we care about. While increasing coverage of climate change solutions
and decreasing coverage of climate change detractors is important, more important
is to “scatter” climate change on the issues above that get coverage
every day. Making the link between climate change, our lives, our work and
our play will be vital in shifting public attitudes.

This “recommendation” certainly seems to have been taken to heart. Hardly a week goes by that I don’t hear some claim of global warming impact in a totally unrelated news story. A few examples:

Government regulating the color of vehicles

GLOBAL warming will take a toll on children’s health, according to a new report showing hospital admissions for fever soar as days get hotter. The new study found that temperature rises had a significant impact on the number of pre-schoolers presenting to emergency departments for fever and gastroenteritis.

How ignorant do they think we are?

To that end we recommend that:

Press Officer Training is carried out across Government departments,
to maximise the potential for making connections with the climate change agenda

Recommendation 28

A series of training sessions for Government and NDPB communicators and press
officers will potentially pay back big dividends in regard to getting the
climate change message across in the broadest sense. Training should either
be bespoke to each government department/agency or facilitated in a manner
that will allow participants to make the links to their core policy/communications
issues.

We can make the media’s task easier by ensuring that we make the
necessary connections in our press briefings for them. We therefore recommend:

That climate change targeted press releases be issued by all relevant
Government Departments and Agencies (not just Defra), to make connections
with climate change wherever possible

That specialist media should be targeted, to take advantage of the scope
for linking lifestyle and climate change

Recommendation 29

Somehow I don’t think traditional media outlets would mind being used as government puppets. In fact, I think the media would happily engage if asked outright.

This outreach into previously ‘climate ignorant’ territory
should be expanded into the specialist press, where climate change can be
connected to issues of key interest to the target audience, bringing the challenge
home to within their sphere of influence and making it timely and relevant.
Again, this offers the potential for connecting climate change with the issues
the public really care about – health, employment, leisure and the economy
etc. From changing the planting of your garden to allow for climate change
adaptation to cutting the food miles generated by your dinner, the potential
is huge and largely unrealised at present. Gaining a balanced hearing in specialist
media is often easier than through the broadsheets or red tops.

Once again, trying to tie AGW into everything. I broke a tooth a few nights ago—it’s global warming! My son flunked a math test—it’s global warming!

I find it incredibly hard to believe that anyone with an ounce of sense would fall for this, but the latest polls indicate that many people do.

Halloween is eleven-some-odd months away, but read these two documents anyway. They’re enough to curdle the blood.

Splitting hard science from politics

Posted in AGW Political, AGW Rhetorical, Uncategorized by emelks on November 27, 2009

I’ve been following the Climategate news from Steve McIntyre’s mirror site, http://camirror.wordpress.com/. After downloading and digesting some of the documents in the leaked .zip file, I started posting pieces I found relevant to Steve’s blog, but Steve is focused on the hard-science aspect of the leak, whereas I’m more interested in, and better qualified to opine on, other aspects of the file.

Rather than litter Steve’s blog with my discoveries, I’ve decided to post them here for anyone interested.