Twitter Phishing…YOUR responsibility!

The recent spate of Twitter ‘phishing’ attacks has been interesting for me in a number of ways. First of all, my wife received one of the phishing DMs from a contact of hers whose account had been compromised. Fortunately, she knew enough not to enter any details into the page she was directed to, and there was no harm done. A quick change of password just to be on the safe side, and that was that.  This particular DM was a ‘social engineering’ attack – an invitation to check out a website to see if the recipient of the DM was featured on that site.  A nice try – after all, most people are interested in finding themselves on the Net!

The second point of interest is the sudden flurry of attempts to compromise Twitter accounts. It’s been suggested that one reason is that the compromised accounts will be used to promote sites into search engines, based on the recent development of the search relationship between Yahoo and Microsoft’s ‘Bing’.  Getting hold of the Twitter accounts would have been the first stage of the operation; the idea would be to automate those accounts to ‘spam’ other users with links over the next few weeks in an attempt to increase the search engine standing of those links.

But the thing that’s surprised me most is how often people have actually gone along with the phishing request – entering their Twitter username and password into an anonymous web page, with no indication of what the page is!  To be honest, it stuns me.  And it isn’t just Internet neophytes – according to this BBC story, an invitation to improve one’s sex life was followed up by bankers, cabinet ministers and media types.  Quite funny, in a way, but also quite disturbing – after all, these are people who’re likely to have fairly hefty lists of contacts on their PCs, and whilst an attack like the one detailed in this article is quite amusing, a stealthier attack launched by a foreign intelligence service against a cabinet minister’s account would be of much greater potential concern.

There are no doubt technical solutions that Twitter can apply to their system to reduce the risk of these phishing attacks propagating.  For example, looking at the content of DMs sent from an account and flagging a warning if a large number of DMs are sent containing the same text.  Twitter have also been forcing password changes on compromised accounts – again, this has to be a good move.  It might also be worth their while pruning accounts that have been unused for a length of time – or at least forcing a password change on them.
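A heuristic like that can be sketched in a few lines – count how many times an identical DM text leaves an account and flag repeats over a threshold. This is just an illustration of the idea, not anything Twitter actually runs; the data and the phishing URL below are invented:

```python
from collections import Counter

def flag_suspicious_dms(dms, threshold=3):
    """Flag any DM text sent verbatim at least `threshold` times --
    a crude version of the 'same text, many recipients' heuristic."""
    counts = Counter(text for _sender, text in dms)
    return {text for text, n in counts.items() if n >= threshold}

# Invented sample data: one compromised account blasting the same DM.
dms = [
    ("mallory", "Is this you? http://evil.example/x"),
    ("mallory", "Is this you? http://evil.example/x"),
    ("mallory", "Is this you? http://evil.example/x"),
    ("alice", "See you at lunch?"),
]
print(flag_suspicious_dms(dms))  # the repeated phishing text is flagged
```

A real system would obviously need rate windows and fuzzy matching – phishers vary their text – but the principle is that simple.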

A further part of the problem is the use of link-shortening services like Bit.ly to reduce the length of URLs in Tweets.  This means that you can’t even take a guess at the safety or otherwise of a shortened link; a link that is gobbledegook could lead to the BBC website to read the story I mentioned above, or to a site that loads a worm on to a Windows PC – or prompts you for your Twitter credentials.  Perhaps a further move for Twitter would be to exclude the characters in URLs from the 140 character limit.  That way, full URLs could be entered without shortening.
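At the very least you can spot that a link goes through a shortener before you click it, since the shortening services live on a handful of well-known domains. A minimal sketch (the domain list here is a few well-known examples, nowhere near exhaustive):

```python
from urllib.parse import urlparse

# A few well-known shortener domains -- illustrative, not a complete list.
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "is.gd", "t.co"}

def is_shortened(url):
    """True if the URL's host is a known link-shortening service,
    i.e. you cannot tell where it really leads just by reading it."""
    return urlparse(url).netloc.lower() in SHORTENER_DOMAINS

print(is_shortened("http://bit.ly/abc123"))    # shortened: destination hidden
print(is_shortened("http://news.bbc.co.uk/"))  # a full URL you can at least read
```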

But ultimately a lot of the responsibility for Twitter phishing attacks lies with us users.  We need to bear the following in mind:

  1. If you get a DM or Reply from ANYONE – even a friend – that says ‘Is this you?’ or ‘Read this’, check with the person concerned to see whether they really sent it.  If you get such a message from anyone who’s not well known to you, just ignore it.
  2. DO NOT enter your Twitter username and password into any website that a link takes you to.  If you do, change your password as soon as possible, and don’t use your Twitter password on ANY other system.
  3. Keep an eye on your Followers – if there is someone you don’t like the look of, just block them.  It may seem extreme but it stops possible miscreants ‘hiding in plain sight’.
  4. Ensure your anti-virus and anti-malware software is up to date – this is your last line of defence designed to stop malware that YOU have allowed on to your machine by falling for phishing scams. 🙂

So…play your part in reducing the impact of Twitter Phishing attacks by not clicking those links!

The Last Temptation of Mankind?

One of my professional interests is in Artificial Intelligence – AI.  I think I’ve had an interest in the simulation of human personality by software for as long as I’ve been interested in programming, and have also heard most of the jokes around the subject – particularly those to do with ‘making friends’. 🙂  In fiction, most artificial intelligences portrayed have something of an attitude problem: HAL in 2001 – insane; the Terminator – designed to be homicidal; the Cylons in the new version of Battlestar Galactica and the ‘prequel’ series, Caprica – originally designed as mechanical soldiers, then evolving into something more human with an initial contempt for their creators.  The moral of the story – and it goes all the way back to Frankenstein – is that there are indeed certain areas of computer science and technology where man is not meant to meddle.

Of course, we’re a long way from creating truly artificial intelligences – those capable of original thought that transcends their programming.  I recently joked that we might be on our way to having a true AI when the program tells us a joke that it has made up and that is genuinely funny!  I think the best we’ll manage is to come up with a clever software conjuring trick; something that, by deft programming and a slight suspension of disbelief on the part of people interacting with the software, will give the appearance of an intelligence.  This in itself will be quite something, and will probably serve many of the functions that we might want from an artificial intelligence – it’s certainly something I find of interest in my involvement in the field.

But the problem with technology is that there is always the possibility of something coming at us unexpectedly that catches us out; it’s often been said that the human race’s technical ability to innovate outstrips our ethical ability to come up with the moral and philosophical tools we need to help our culture deal with the technical innovations by anywhere from a decade to 50 years; in other words, we’re constantly trying to play catch up with the social, legal and ethical implications of our technological advances.

One area where I hope we can at least do a little forward thinking on the ethical front is in the field of AI; would a truly ‘intelligent’ artificial mind be granted the same rights and privileges as a human being or at the very least an animal?  How would we know when we have achieved such a system, when we can’t even agree on definitions of intelligence or whether animals themselves are intelligent? 

Some years ago I remember hearing a BT ‘futurist’ suggest that it might not be more than a decade or so before it would be possible to transfer the memory of a human being into computer storage, and have that memory available for access.  This isn’t the same as transferring the consciousness; as we have no idea what ‘consciousness’ is, it’s hard to contemplate a tool that would do such a thing.  But I would accept that transferring memories into storage might be possible and might even have some advantages, even if there are ethical – and the ultimate in privacy – implications to deal with.  Well, it’s certainly more than a decade since I heard this suggestion, and I don’t believe we’re much closer to developing such a technology, so maybe it’s harder than was thought.

But what if….

In the TV series ‘Caprica’, the artificial intelligence that controls the Cylons is provided by an online personality, created by a teenage girl as an avatar in cyberspace, that is downloaded into a robot body.  In Alexander Jablokov’s short story ‘Living Will’, a computer scientist works with a computer to develop a ‘personality’ that mirrors his own, but that won’t suffer from the dementia that is starting to affect him.  In each case a sentient program emerges that in all visible respects is identical to the personality of the original creator; the ‘sentient’ program thus created is a copy of the original.  In both Caprica and ‘Living Will’ the software outlives its creator.

But what if it were possible to transfer the consciousness of a living human mind over to such a sentient program?  Imagine the possibilities of creating and ‘educating’ such a piece of software to the point at which your consciousness could wear it like a glove.  From a situation where the original mind looks on his or her copy and appreciates the difference, will it ever be possible for that conscious mind to be moved into the copy, endowing the sentient software with the self-awareness of the original mind, so that the mind is aware of its existence as a human mind when it is in the software?

Such electronic immortality is (I hope) likely to be science fiction for a very long time.  The ethical, eschatological and moral questions of shifting consciousnesses around are legion.  Multiple copies of minds?  Would such a mind be aware of any loss between human brain and computer software? What happens to the soul?

It’s an interesting view of a possible future for mankind – to live forever in an electronic computer, at the cost of becoming less than human.  And for those of us with spiritual beliefs, it might be the last temptation of mankind: to live forever, and turn one’s back on God and one’s soul.

The PAYG Laptop?

You write one article about Appliance Computing and the following morning this BBC story pops up – ‘Laptop launched to aid computer novices’.  The ‘Alex’, a Linux-based laptop, is aimed at people who’re occasional computer users and comes with an office suite, mail, browser, broadband connection and a monthly fee.  In other words, a PAYG laptop.  There’s nothing new about this; a number of mobile phone companies offer mobile broadband packages that include a Windows laptop, and in the recent past there have been a few occasions when companies have attempted to launch similar schemes, sometimes backed with advertising.

I say attempted, because they’ve tended not to work, and I’m not at all convinced that this one will be any more successful.  The company’s website describes the package available here, and to be honest it does seem rather over-priced for what is a modified and stripped-down Ubuntu distro – and one that seems to only work when your broadband connection is running.  It’s a good business model, provided that you can get people to buy in to it.  There’s a review of the package to be read here.

Now, first question – who is the market?  The broadband company who’ve developed this package claim that almost 25% of people in the UK with computers don’t know how to use them.  Really?  That I find difficult to believe.  Most folks I know – across the board: non-techies, techies, old, young, whatever – are quite au fait with using their computer to do what they want to do.  There may be aspects of computing that they don’t get, in the same way that I don’t ‘get’ iTunes, for example, or the intricacies of computer or video gaming, but I know no-one who’s bought a computer who doesn’t make some use of it.  Perhaps that 25% didn’t really want a computer, or have ended up with one totally unsuitable for them?

If the market sector is this 25%, then what proportion are willing to pay for a £400 computer and a £10 access fee?  Apparently a ‘software only’ option that can be installed on older computers, and that will simply cost you the monthly fee, is out in the next couple of months, which might allow people with older computers to make use of them.  The package comes with 10GB of online storage; does this mean that local storage is not available?  If so, what happens to your data if you don’t pay your monthly fee or cancel your subscription?  To be honest, that sounds like something of a lock-in akin to Google Docs.  According to this review, on stopping the subscription the PC effectively ‘expires’ – along with the access to your data.

I’m afraid that from what I can see I’m not impressed with either the environment or the limitations on offer.  One of the things that you learn after a while in putting together user interfaces is that people who come in knowing nothing soon gather skills, and in some cases start finding the ‘simple interface’ that originally attracted them to be a limitation.  With a standard PC, you just start using more advanced programs and facilities; with something like the Alex you’re stuck with what you’re given.  And whilst you could just buy a PC and ask someone to set it up ‘simple’ for you (to be honest, it isn’t THAT difficult with a Windows PC, Mac or Linux machine if you ask around) and use a more ‘mainstream’ machine, you’re still stuck with your data being locked in to the Alex environment.

The solution to this problem is perhaps to look at front ends that sit on existing platforms, rather than work to further facilitate the move towards a computer appliance future split between a large number of manufacturers who lock us in to proprietary data stores.

The Appliance Computer?

Well, the fuss over the launch of the iPad has died down somewhat – it wasn’t the Second Coming or the Rapture, the world didn’t suddenly turn rainbow-coloured (not for me, anyway) and the Apple fans have gone quiet.  So perhaps it’s time to take a few minutes to think about what the iPad might mean for the future.  This is an interesting viewpoint – that the iPad could be the first step on the road to the computer as a true ‘appliance’.

In some ways, this might not be a bad thing – after all, it’s the way that all technology has tended to go over the years.  Take radio, for example: the first radio receivers required the operators to be reasonably knowledgeable about the equipment, and in some cases to be able to build and maintain their own sets.  Radios required large outside aerials, and I clearly remember a ‘Home Maintenance’ book of my mum’s, dating from the 1920s, that had great amounts of information about how to service your wireless were it to go wrong.  By the 1930s they were more self-contained ‘black boxes’ – OK, self-contained walnut boxes – and by the time we hit the 1970s little radios were being given away as children’s toys.  We’re moving along the same path with computers: when home computers first became available you were expected to want to write some of your own programs or even build the machine; then published software came along; and now we’re at the point where very few people write their own software at all.

But the thing is with contemporary computers is that you can still write your own software if you wish to; you can go out, buy a copy of VB.NET, download Python or PHP or Java and with some application write your own software.  And if your computer doesn’t support media you want to view or listen to, you can just get a piece of software installed that will do the trick.  And if you want it to do something totally new, you can again find an application somewhere, or write your own, or commission someone else to write it for you – all without fear or favour.

If computers follow the logical progression, then we could expect to see them move on to a stage of development where they are pretty much ‘closed units’ – the old joke of ‘no user serviceable parts’ will be very applicable.  Think of the computer of tomorrow as being a little like your smartphone or a digital TV with Satellite TV and a DVD recorder built in; there’s content for you to view, you can save it, there may be services to buy, but you’re not going to be able to add functionality to it by producing your own code or content to run on it.

In other words, surprisingly like an iPad.  And some analysts have noted that the apparent lack of expandability of the iPad might not be a design omission, but might actually be a deliberate design policy.

Producing computers that are simply glorified media players has a number of advantages for many parts of the hardware and content industries.  To start with, if you can totally control the hardware and software environment then you can restrict your support calls; many software houses that produce applications for Windows have to have reasonable support functions in their companies because whilst their software runs on Windows, each PC running Windows is to a great degree unique, and therefore offers a near unique environment on which the application runs. 

A further point is that once you stop people from being able to put their own software on these machines, then you also prevent a lot of the issues of illicit copying.  By controlling the platform you can control the way in which the platform handles content that might be protected by some sort of Digital Rights Management software.  Indeed, it’s not too difficult to imagine a situation in which the functionality available on the unit can be remotely enabled and disabled  based on the payment of licenses or rental fees – similar to the way in which satellite TV receivers can be activated or de-activated remotely.

The Appliance Computer has a lot to offer manufacturers and content providers; it locks users in; it protects content; it makes the equipment more reliable.  But it also eats away at the very foundations of what has made so many software applications possible – the ability for anyone to write their own software.

Don’t let Appliance Computing remove the freedom to compute.

UK Government Data Release – much ado about nothing?

Back in January the UK Government opened a web site that was described as “a one-stop shop for developers hoping to find inventive new ways of using government data”.  The site, http://data.gov.uk/, aims to pull together government-generated data sets in a form that application developers can use to create ‘mashups’ of data from different public and private sources, create map-based information from the data, etc.  In other words, the idea is to open up public data for private use.

I was pretty excited; professionally I’ve used some public data in the past, and acquiring it is usually quite hard going.  Even if you know where to find the data, it’s not easy to just grab and download, and then it comes in various formats that need pre-processing to make useful.  I wouldn’t go so far as to say that my nipples were pinging with excitement when I heard about this project, but there was definite anticipation.

So….my thoughts.  Bottom line for me at the moment is ‘Sorry chaps, sort of getting there but there’s a long trail a-winding before you reach your goal’.  Now, this may sound rather churlish of me, but allow me to explain….

Nature of data

First of all, a lot of the data on the site has been available in other places before now – however, it is at least now under one roof, so to speak.  The data is available in disparate formats, like CSV files, and is also pre-processed / sanitised – depending upon how you want to view it.  In some cases the data is in the form of spreadsheets that are great for humans but dire for automated processing into mashups.  The datasets are not always as up to date as one might expect; for example, on digging through to the Scottish Government data, I found nothing more recent than 2007.

Use of SPARQL and RDF

Although the SPARQL query language has been implemented to allow machine-based searching of the site, the data available via this interface seems to be pretty thin on the ground AND, to be honest, I’m not sure that the format is the best for the job.  SPARQL is a means of querying data represented in the RDF format on what’s called the ‘Semantic Web’ – a way of representing data on the Internet so that it is more easily made meaningful to search tools.  But for a lot of statistical data this isn’t necessarily the best way to search, and the SPARQL language is not widely used or understood by developers.
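For the curious, a SPARQL request over HTTP is ultimately just a GET with the query text as a parameter – which is part of my point about developer familiarity. A sketch of building one; the endpoint address and the predicate URI are invented for illustration, not real data.gov.uk resources:

```python
from urllib.parse import urlencode

# A hypothetical SPARQL query -- the predicate URI is invented.
query = """
SELECT ?area ?population WHERE {
  ?area <http://example.org/def/population> ?population .
}
LIMIT 10
"""

endpoint = "http://services.data.gov.uk/sparql"  # illustrative endpoint only
request_url = endpoint + "?" + urlencode({"query": query, "output": "json"})

# The whole query travels as one percent-encoded GET parameter.
print(request_url.split("?")[0])
```

Compare that with a plain REST call returning JSON, which most web developers could consume without learning a new query language first.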

No API

There’s no API available such as a Web Service to get at the data.  The site acknowledges this and states :

“The W3C guidance on opening up government data suggests that data should be published in its original raw format so that it’s available for re-use as soon as possible. Over time, we will convert datasets to use Linked Data standards, including access through a SPARQL end-point; this will provide an API for easy re-use.”

I think this is a rather facile argument.  Apart from the data not being that up to date, one can surely publish the content of the data raw – i.e. with no numerical alterations – whilst still making it available via SOAP, JSON or another similar API that more developers have experience of and access to.  As it stands, it just seems that some of the time spent on this project could have been spent getting the data into a consistent format that could be served to a wider range of developers.
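To illustrate what I mean by ‘raw but consistent’: re-serialising a CSV dataset as JSON changes no numbers at all, only the container format. A minimal sketch, with invented sample rows standing in for a government dataset:

```python
import csv
import io
import json

def csv_to_json(csv_text):
    """Re-serialise a raw CSV dataset as JSON: no numerical alterations,
    just a container format that more developers can consume directly."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows)

# Invented sample rows -- not real data.gov.uk figures.
raw = "region,year,value\nScotland,2007,123\nWales,2007,456\n"
print(csv_to_json(raw))
```

That is the whole job: the figures pass through untouched, so ‘publish raw first’ and ‘offer a developer-friendly API’ are not actually in tension.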

This current interface – wait for the heresy, people – may be wonderful for the Semantic Web geeks amongst us BUT for people wishing to make widescale, real use of the data it’s NOT the best format to allow the majority of non bleeding-edge developers to start making use of the data available.

Summary

This is an early stage operation – it is labelled ‘Beta’ in the top right of the screen, and as such I guess we can wait for improvements.  But right now it just seems to be geared too much towards providing a sop for the ‘Open Data’ people rather than providing a widely usable and up to date resource.

Google Buzz and Google’s incursion in to Social Networking

Many years ago there was a joke in techy circles that likened Microsoft to the Star Trek aliens ‘The Borg’.  It appeared at the time (mid 1990s) that Microsoft were indeed determined to assimilate everything they encountered and absorb the technology of other companies into their own.  Well, like the Borg in Trek, Microsoft finally found that they couldn’t assimilate everything.  But today there’s a new Borg Queen on the block, in the form of Google.

Google Buzz was launched as an adjunct to Gmail, and Google got themselves into hot water at the launch by having the system automatically follow everyone in your Gmail contacts list.  This was regarded as pretty heavy-handed on Google’s part – and Google obviously concurred to some degree, as they introduced changes to this part of the system.  The problem for Google is that they have a lousy history of handling privacy issues in both their search tools and Gmail, and I guess starting a new product off with a similar disregard for the perceptions of their users was not a sound move.

So, how relevant is this move by Google?  I have to say that I’m not convinced that Buzz (or, for that matter, Wave) will represent major competition to Facebook or Twitter.  Buzz’s lock-in to Google’s infrastructure is something Facebook doesn’t have, for instance: I don’t have to have a Facebook email account, and I don’t do my searching through Facebook.  And therein lies the problem for me – and it all comes back to Google’s database of intentions that I’ve mentioned before in this blog.  The more Google can derive about the way in which people use Search, who they interact with, and what ‘clusters’ of interests people have – even anonymously – the more value Google’s database of intentions has.  You might want to take a look at some of my previous articles about Google – Google and The Dead Past, The importance of Real Time Search and Google seeks browser dominance – to get a feel for my views.  Google’s strategic moves have consistently been to get Google search into everything we do.  Gmail was their first crack at this with personal communications, and now with Wave and Buzz they have the tools to map social networks, and the search behaviours of people on those social networks, especially if people remain logged in to Google accounts whilst they do their searching.

Let’s pretend…..you are logged in to your Buzz account and you search for something.  Google can link your search interests to those of the people in your social network, and vice versa.  They can thus add the collective behaviour of your searches to their database of intentions – remember what I said about the Borg? 🙂  And we’re not even thinking about the additional data provided by Google Apps…

 Google are also purchasing a ‘Social Search’ tool that allows people to ask questions of their social groups; I think we can safely assume that the responses will be squirreled away somewhere for future use.

Even when anonymised, this sort of information builds in to a very valuable commodity that Google can sell to future ‘partners’.  Google’s behaviour at the moment seems to be to develop or acquire a series of discrete elements of Social Networking technology that they’re bringing together under the existing account system of Gmail / Google Accounts, which makes perfect sense.  At one time Microsoft filled in some of the gaps in their various offerings in a similar way to allow them access to market segments that they were still trying to penetrate.  Perhaps Google have learnt from the software behemoth.

But they have a way to go – here are what I consider Google’s biggest challenges.

  1. The attitude of the public towards Google is not entirely positive, and whilst Facebook have had numerous privacy problems, their market presence is defined purely by Social Networking – not by Social Networking plus Search, Email, productivity tools, kitchen-sink manufacture, etc.
  2. Facebook may easily lose market share to a good competing service; their constant re-vamping of the user interface and buggy code upset users, but at the moment there is no viable competition for most people because Facebook is where their social network is.  Google would have to get people to migrate en masse, over a short period of time, to get the sort of success Facebook shows.
  3. Wave is certainly buggy; Gmail and Buzz are designed to not run on IE6 and it’s debatable how long Google will support other Microsoft Browsers – I wonder how many people would want themselves tied in to Google at the level of software as well as applications?  Like I said earlier – Facebook doesn’t require me to have a Facebook email address.
  4. What’s Google’s target market?  Wave seemed to be a solution looking for a problem; Buzz seems to be a similar ‘half-way house’ affair that in some ways would have been best placed in Wave.  Twitter and Facebook tend to provide specific groups of users with a defined user experience and functionality.  Quite what Buzz, Wave and Gmail together provide that isn’t available elsewhere is not clear to me.

So….my thoughts?  If this is Google’s attempt to park their tanks on Facebook’s lawn, then they’ve invoked the ‘Fail Whale’.

We know where you’ve been on the Net, and we don’t need no steenkin’ cookies!

I’m not overly paranoid about people knowing where I’ve been on the Internet; I’m aware that it’s pretty easy for a website to feed your browser ‘tracking cookies’ that can be used for marketing and advertising purposes, and these can then be picked up on other sites, thus providing a path of footsteps that you have followed online.

Which is why I clear my cookies regularly, and set my browsers to only accept cookies from sites that I want to accept cookies from.  But I can see that in some parts of the world your browsing history might be of great interest to government and law enforcement, and I’m sure that many of the larger online retailers would love to get their paws on a good, reliable and hard-to-circumvent method of looking at what common interests people have.  Even if you’re anonymous, it can be of great use to companies to know what sorts of sites you visit, because they can then use data-mining techniques to derive what other sites or products you might be interested in.  If you’re an Amazon user, you’ll be aware of recommendations of the form ‘We see you’re interested in x.  Other people interested in x also bought y and z’.
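That ‘people who bought x also bought y’ logic can be sketched with nothing more than co-occurrence counting – a toy version of the idea, certainly not Amazon’s actual method, with invented baskets:

```python
from collections import Counter

def also_bought(orders, item, top=2):
    """'People interested in x also bought y': count the items that
    co-occur with `item` in the same order, most common first."""
    co = Counter()
    for basket in orders:
        if item in basket:
            co.update(i for i in basket if i != item)
    return [i for i, _n in co.most_common(top)]

# Invented order history.
orders = [
    ["camera", "memory card", "tripod"],
    ["camera", "memory card"],
    ["camera", "tripod"],
    ["novel"],
]
print(also_bought(orders, "camera"))
```

Swap purchases for visited sites and you have exactly the kind of interest-profiling I’m describing – which is why a reliable cross-site identifier is so valuable.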

Now…let’s take this a little further.  I was browsing around the other afternoon and came across this site.  Give it a try – it’s under the auspices of the Electronic Frontier Foundation.  I don’t know what it came back with for you, but my ‘fingerprint’ was pretty darn rare – I guess it’s inevitable because of the various things I have installed on this computer for work.  The site looks at the information sent by your browser and uses it to derive a ‘uniqueness’ factor – a sort of tag.  For an out-of-the-box installation of an operating system, I’d expect that there would be quite a few people whose fingerprints are essentially the same.  But the more you tweak and configure and install stuff on your PC, the more unique it gets…to the point at which it can identify your PC uniquely, with very few errors.
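The principle behind the EFF tool can be sketched in a few lines: hash whatever the browser volunteers into a single tag, and measure how identifying that tag is. The attribute values below are invented examples, and real fingerprinting uses many more signals than this:

```python
import hashlib
import math

def browser_fingerprint(headers, plugins, fonts):
    """Hash the attributes a browser reveals into one 'fingerprint' tag --
    no cookie required: the same configuration yields the same tag."""
    raw = "|".join([headers, ",".join(sorted(plugins)), ",".join(sorted(fonts))])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def bits_of_identifying_info(users_sharing, population):
    """Surprisal: how many bits a fingerprint reveals when only
    `users_sharing` out of `population` browsers share it."""
    return math.log2(population / users_sharing)

# Invented example attributes -- an odd font or plugin makes you rarer.
fp = browser_fingerprint(
    headers="Mozilla/5.0 (Windows NT 6.1)|en-GB|gzip",
    plugins=["Flash 10.1", "Java 1.6"],
    fonts=["Arial", "Obscure Corporate Font"],
)
print(fp)
print(bits_of_identifying_info(1, 1_000_000))  # unique in a million: ~19.9 bits
```

The punchline is the second function: a configuration shared by one machine in a million carries around 20 bits of identifying information, and that is already enough to pick most individuals out of a population.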

And all this without it ever putting a cookie anywhere near your PC.  Now, there are ways around this – there always are – but they’re not the sort of approaches that the average man or woman in the street would take.

So what sort of ‘advantage’ would such a technology offer online companies, Government and the Security Services?

Now, this is pure supposition – I have absolutely no evidence at all that this is happening or is likely to happen…but let’s pretend.  We’ll assume that a number of large online companies have collaborated on sharing this fingerprint data – basically you visit a site or even a page – or maybe even do searches for certain subjects – and your electronic fingerprint is tagged on to that fact.

Scenario 1.  You do a search for information on equipment to help you avoid speed cameras.  Later that day you go to buy car insurance.  The insurer does a quick check on your ‘fingerprint’ against topics of interest to it – including sites offering legal advice for people caught speeding and also sites that inform or advise on speed traps.  You show up – you’re declined.

Scenario 2.  You’re interested in computer hacking – maybe even researching a book.  You visit a number of sites of interest, look at books on Amazon and such.  A few weeks later a major ‘hack’ happens and the authorities look at the electronic fingerprints of everyone who may have researched the topic.  You will show up.  This fingerprint is then circulated around ISPs who note that it is one that is associated with your Internet account.

Scenario 3. You’re gay in a country run by a repressive regime.  You visit web sites where the fingerprinting is being done for commercial marketing reasons.  The security services of your country get hold of this data, either by buying it or stealing it, and run a check of those fingerprints against the ones that are on file with the ISPs of that country.  You will find yourself in major trouble.

There are ways around this technique – it’s easy to go through proxies, and possible to strip all this sort of identifying data off of the packets that go to web sites.  And people who’re genuinely worried (or have reason to avoid this sort of inspection) will no doubt be doing this.  But for the vast majority of people this simply would be yet another means of intrusion in to our private lives.

Where next for space?

I have to admit to being quite saddened by President Obama’s announcement cancelling the NASA project to carry out manned missions to the moon.  Perhaps it’s something to do with my age; I can remember the Apollo moon missions as a schoolboy, along with the feeling that by 2001 we might actually have a world something like that portrayed in the movie 2001.  And then Star Trek couldn’t possibly be too far behind.

Well, to paraphrase the old Apple advert, ‘2001 wasn’t like 2001’.  The public lost interest, the 70s happened and the money ran out.  Governments felt there were more politically and economically pressing concerns, and space flight became very much a science-driven, unmanned affair, with the exception of the Shuttle.  And now we have Ares / Orion being cancelled for being over budget, late and reliant on old technology, and one wonders whether we’re going through the same sort of thing again.

For a President who is supposed to have vision, I’m afraid that Obama is currently looking pretty short-sighted on this one. I totally appreciate that the US (like the rest of us) has some significant budgetary issues to deal with.  But the ongoing silliness of bailing out banks and large businesses does nothing for national confidence or vision.  It benefits bankers, stockholders and the wealthy.  It reminds the rest of us across the world that we’re regarded as the ‘bank of last resort’ – you can always whip the public purse for a bit more money.  Sure, we can look up to the skies, but all we’ll be seeing are the towering office blocks of the unacceptable face of capitalism that got us into this mess.

Now, more than at any other time in the last 30 years, we need a lead from the US in terms of something with vision and the potential for achievement for all mankind.  The human race enters the 21st Century full of fear – of terrorism, economic crash, ecological disaster.  Whilst renewed and invigorated investment in space exploration won’t do anything immediate to resolve any of these problems, it offers the possibility of technical fixes for the future, the possibility of a unified global approach to the universe beyond our atmosphere and, probably most important of all, a sense of hope.

We have no heroes today; we have no explorers of world renown to show that the human race is indeed made of ‘the right stuff’.  We’re expected to pay taxes that currently go towards bailing out banks and repaying national debt foisted on us by feckless governments who’re trying to govern the world of today with the attitudes of the mid-20th Century.  We have governments mired in control-freakery, who look towards ‘wars on terror’ and ‘wars on drugs’ as means of burying large amounts of economic production that might otherwise be used in more ambitious and adventurous ways – like space exploration.

Mars is the obvious next destination; I hope that the cancellation of Moon-based projects is regarded as a facilitating move to allow more resources to be spent on Mars missions.  But my gut feeling tells me that sooner or later that too will be dumped.  I understand that there are issues to deal with here on Earth, but given the ability of the governments of the world to bail out banks, develop massive and never-ending weapons programmes and generally support any number of flawed and expensive policy programmes that fail to deliver year on year, the resources are available.

It’s just that Governments choose not to support programmes and policies that give their people hope – and not just in space.  And that is saddening and infuriating in equal measure.

Twitter – voluntary spam?

In a recent article, it’s been suggested that Twitter is becoming a major route for spammers to peddle their wares.  This seems to be an emerging feature of all social networks right now, but in today’s piece I want to focus on Twitter.  The view expressed in that article is pretty strong – probably an even more extreme position than I take with regard to spam on Twitter – but it’s worth asking whether we are participating in a network that is becoming more spam than good-quality ham.

As is suggested in the article above, the relationship we have with the people we follow is rather different to the relationship we have with the people who email us spam – on the whole, the folks who send us wonderful offers of Viagra and millions of dollars on the Beserabian national lottery are unknown to us (and probably to any other human being on the planet).  With Twitter, we’ve actually chosen to follow these people, and when one of them transgresses our own definition of ‘spam’ and sends us a message we regard as inappropriate, as well as being annoying there’s also a sense of betrayal of trust, to a greater or lesser degree.

When we interact with people on Twitter, there are two relationships involved: people follow us, and we may follow them.  We only see their content if we choose to follow them, which is why I’m extremely careful about whom I follow.  But even then, if someone who normally posts sensible stuff posts a couple of sales messages over a few days I’m not going to break my heart about it – I get more upset by people who ‘flood’ Twitter with lots of posts one after another, or who repeat posts at frequent intervals.  Such content is much more intrusive, in my opinion, and they tend to be the ones more likely to be dropped from my list of followed people than someone who sends me the odd sales message.

The other relationship is who follows us; until we also follow them, we don’t see any content they put up, but I’m equally protective of who I allow to follow me, based on the real-world aphorism of being judged by the company you keep.  This is why I regularly screen the people following me and block folks who’re obviously doing little else than selling, obviously ‘iffy’ accounts, and so on.  It’s important to do this as far as I’m concerned because I don’t want any of my other followers following someone based purely on the idea of ‘If they’re following Joe, they must be OK!’  Just a quick look at the profile and Tweets of some folks immediately indicates to me that they’re not the sort of company I want to keep.

The ‘silent spam’ we tolerate by allowing folks to follow us who’re ‘bad’ but whose posts we don’t notice is just as bad as the noisy spam we’re aware of on a day-to-day basis.  I regard this as the true voluntary spam, and as members of the Twitter community we should all be blocking and, where appropriate, reporting these folks, even if we are not following them ourselves.

The next, next, next thing!

Hands up whoever has heard of the Red Queen’s Race?  That was the athletic event in Through the Looking-Glass where the participants had to run very hard to stay exactly where they were.  I’m becoming convinced that we’re entering into that sort of event in the online marketing and PR world – and probably beyond as well.  And it worries me.

The article that sparked this off is here – nothing major, really, but it did get me thinking.  Does anyone ever give any online or software technique any realistic time to show whether it can deliver the goods anymore?  Or is it all a case of ‘MTV Attention Span’?  Does everything have to prove itself within a 30 second elevator pitch?  If something does the job, does it effectively and meets whatever targets are set for it, why do so many people jump ship as soon as the ‘next, next thing’ comes along? 

There seems to be no scope today for a technique or technology to be given time to prove itself.  Of course, there are going to be some advances that are so awesomely great it’s obvious even to a relative techno-Luddite like me that they’re worth using immediately, but for everything else, how can you know whether you’ll get more out of an upgrade when you probably haven’t even measured the value of your current process?  If you’re using online tools like Facebook, Twitter or search engine optimisation to market your business, do you actually know how much business comes to your site via each of these channels?  Because if you don’t, then simply changing techniques to fit the current ‘fad’ is likely to be a waste of time; you simply don’t know whether the new tool is worse or better than the old one!

Impatience with results from online marketing methods has always been an issue; people still seem to think that making quick money is possible on the Internet.  I’m afraid the only way to do that is probably to sell people on the Internet ‘Get Rich Quick’ schemes!  But flicking from one technique to another, and then to another, without giving them time to work – or even knowing whether they ARE working – is pointless.

So…my advice?

Well, bearing in mind that I am certainly NOT a marketing expert and not a millionaire, all I can say is apply good, sound marketing techniques, such as:

  1. Measure your traffic to your site or business before you start, using a metric that matters – whether that’s page impressions, money earned, downloads made, whatever suits your business.
  2. Introduce new marketing channels in such a way that business from them is identifiable.
  3. If your business is cyclical in any way, let new techniques run for at least a fair part of that cycle.
  4. When you have your baseline, make changes to the channels one at a time and measure any effects based on those changes.
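The steps above can be sketched in code.  This is a minimal illustration, not a real analytics tool: it assumes you tag each channel’s links with a query parameter (the ‘utm_source’ convention is one common way to do this) and then count visits per channel from a traffic log.  The URLs and channel names are invented:

```python
from urllib.parse import urlencode, urlparse, parse_qs
from collections import Counter

def tag_url(base_url: str, channel: str) -> str:
    """Step 2: make business from each channel identifiable by
    tagging its links with a source parameter."""
    return f"{base_url}?{urlencode({'utm_source': channel})}"

def channel_counts(visited_urls: list[str]) -> Counter:
    """Steps 1 and 4: count visits per channel, giving you a baseline
    to compare against when you change one channel at a time."""
    counts = Counter()
    for url in visited_urls:
        # Untagged visits are lumped together as 'direct'.
        source = parse_qs(urlparse(url).query).get("utm_source", ["direct"])[0]
        counts[source] += 1
    return counts

# Hypothetical traffic log:
log = [
    tag_url("https://example.com", "twitter"),
    tag_url("https://example.com", "twitter"),
    tag_url("https://example.com", "newsletter"),
    "https://example.com",  # untagged: direct traffic or organic search
]
print(channel_counts(log))
```

Run before you change anything and you have your baseline (step 1); run again after altering a single channel and any movement in that channel’s count is attributable to the change (step 4).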

Just remember the old adage that you cannot manage what you can’t measure; just because the technology changes doesn’t mean that common sense approaches to marketing should change as well.