Some questions for the "5 Eyes" countries on what they think they can do
Éibhear, 2018-09-04 20:11:00 IST

As one commenter put it, "Here we go again". There's something up with the so-called "Intelligence Community" that convinces them that there is a technical solution to the social and political problem that exercises them the most these days. And what is that problem? It's that people talk to each other, and often1 want to do so in private. In a Statement of Principles on Access to Evidence and Encryption, the 5-eyes partnership – the U.S.A., the U.K., Australia, New Zealand and Canada, working together to share all the "signals" (a.k.a. internet) data they capture – has again stated that internet service providers must "assist authorities to lawfully access data, including the content of communications" if the law requires it. This is all fine, but given the history of the requirement, and the ways it has been stated over the years, it's clear that this is code for their belief that service providers must only provide encrypted communications that law enforcement and other security types can spy on2. It has been pointed out countless times, by people far better informed than I, that what they seek just can't be provided safely. I am firmly of the belief that this requirement is effectively impossible to meet, and that attempts to put it in place will ultimately harm innocent people. Here's a little experiment… Social Gibiris is an instance of the microblogging system called GNU Social. I operate that instance. Social Gibiris is a node in a federated group of similar systems that might be as large as thousands – or maybe tens of thousands – of nodes. These federated nodes use internationally-agreed information-sharing protocols so that a post on one node is made available to users on another if they wish it. Not all of these nodes, or computers, run GNU Social.
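Those "internationally-agreed information-sharing protocols" are, in GNU Social's case, built on open formats such as Atom (the basis of the OStatus federation protocol). To illustrate how little is needed to read a federated post – and therefore how easy it is for anyone to write a new node – here is a minimal sketch; the feed content is invented for illustration:

```python
# A sketch of why federation is easy to join: posts travel as Atom
# XML, an open, documented format. Any program that can parse XML
# can read a federated post. This entry is a hypothetical example.
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

sample_feed = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>A hypothetical federated timeline</title>
  <entry>
    <title>An encrypted note</title>
    <content type="text">-----BEGIN PGP MESSAGE----- ... -----END PGP MESSAGE-----</content>
    <published>2018-09-04T20:11:00+01:00</published>
  </entry>
</feed>"""

def extract_posts(atom_xml):
    """Return the text content of every entry in an Atom feed."""
    root = ET.fromstring(atom_xml)
    return [e.findtext(f"{ATOM_NS}content")
            for e in root.findall(f"{ATOM_NS}entry")]

posts = extract_posts(sample_feed)
```

Nothing here is specific to any one software package: Pleroma, Mastodon, postActiv and the rest interoperate precisely because the format is open like this.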
Other alternatives that speak the same information-sharing protocols include Pleroma, Mastodon, postActiv and many other software packages. However, they can all share messages with each other, and no one organisation or entity is in control of this environment. As the information-sharing protocol is freely available, anyone can write a new system that can be plugged into the federation of nodes, and – if that person chooses to do so – the code for that new system can be shared with anyone else for them to create their own node. At present, GNU Social, which Social Gibiris uses, doesn't support end-to-end encryption of messages. Therefore, the 5-eyes group of nations doesn't have to worry about trying to crack those messages – they're there for all to see. But, I have posted an encrypted message on my own instance. Only one person can read the decrypted message, and I'm not saying who that is. That person doesn't even know that the message is there. It's not important. That message is now available on other micro-blogging nodes that mine is federated with. By my act of posting it on my instance, Highland Arrow has it, so has social.heckin.tech, it's on gnusocial.net and – while it's still in existence – it's on Quitter.no3. Heck, it's even accessible on twitter! The one person who can decrypt that message can do so by retrieving it from anywhere it has been federated to (except twitter, which doesn't support messages longer than 280 characters, and mine is longer than that). What is important is that I posted an encrypted message to my micro-blogging system that doesn't support encryption, and the 5-eyes countries can't do anything about it. Or… can they? If the answer to any of the following questions is No, then it's unlikely that they can.
Can the 5-eyes countries break the encryption method I used to encrypt that message (PGP)?
Can they break the other encryption methods that are currently considered "strong"?
Can the 5-eyes countries criminalise the use of that encryption method, or any other encryption method that they can't break?
Can they force all the other countries in the world to follow along with such a criminalisation?
Can the 5-eyes countries really force all the large, global internet platforms to use weakened encryption for the messaging services they offer their users?
If they were successful in forcing all the large, global internet platforms to use weakened encryption, can they force all the world's internet users to use only those platforms?
Can they prevent users of these large, global internet platforms from first encrypting their messages themselves using strong encryption (i.e. just like I did with my encrypted message on Social Gibiris)?
Can the 5-eyes countries stop small software development communities from developing alternatives (e.g. GNU Social, postActiv, etc.) to the weakened systems operated by the large global platforms?
Can the 5-eyes countries stop small software development communities from developing alternative encryption methods in the event that the currently-strong methods become weak (e.g. by some new methods being developed to defeat them)?
Can the 5-eyes countries genuinely prevent all forms of secret communication, known now or yet to be invented, from being used by all 7.5+ billion people in the world?
To that last question, I have two non-Yes/No follow-up questions: How much would that cost? Do we really think it would be worth it? One of the regular arguments against letting governments "legally" spy on encrypted messages is that once that has been facilitated for governments we like, it will also be facilitated for governments we don't like (and also for other bad actors like, for example, organised crime gangs that want to see our banking transactions).
The reasoning here is that, just like a car, or a screw-driver, or a shoe, an encryption method knows nothing about the intention of its user, and once it's weak, it's weak for all users; good, bad or indifferent. However, that's not my point these days, even though I've made it in the past, and even though I believe it to be compelling. My point these days is that all this effort WILL NOT RESULT IN WHAT THE 5-EYES COUNTRIES PRETEND THEY WANT. And I think that's more compelling. But, what do I know? If you can refer me to a good analysis that suggests I'm wrong, please do so. I can be contacted at @firstname.lastname@example.org, @email@example.com or at my e-mail address, which can be seen at the bottom of this web page. I would genuinely love to hear what there is out there that might prove me wrong.

Footnotes:
1 Often?! Very nearly all the time!
2 Their own law enforcement and security types, no doubt. If it's a legal requirement in China, too, I'm sure the NSA and its ilk wouldn't be massively keen to know that Mr. Trump's WhatsApp messages could legally be decrypted over there!
3 Depending on when you're reading this, quitter.no is shutting down tomorrow, or did so on the 5th September, 2018.

Post-script on why you should stop trying to stop people from using encryption
Éibhear, 2018-08-22 13:40:00 IST

Quick summary: Forcing internet messaging services to permit only weakly-encrypted communications so that governments can access them easily will not impact bad actors, but will seriously impact innocent people who want to obey the law. Read further for my explanation. In my previous post, I outlined the abject pointlessness of governments' attempts to force service providers to weaken the encryption used on their messaging services. The government of Australia has embarked on just such a foolhardy course of action. Whether you agree or not that "something has to be done", this is guaranteed to be a total failure no matter how far the initiative is taken.
The point of my post yesterday is essentially a combination of the following three facts:
It is possible for me to write an encrypted message, to print it out, to put it into an envelope and then the postal system, and for the recipient to do what needs to be done to decrypt that message. No matter how easy it is to open that envelope while the letter is in transit1, the message will still be unavailable to anyone who wants to examine it, because it's encrypted.
In exactly the same way, anyone who cares to do so can send an encrypted message to anyone else over an electronic communications channel that isn't encrypted (e.g. e-mail), or one that uses a deliberately-weakened encryption (e.g. WhatsApp, etc., if some so-called democratic governments get their way).
The financial cost of being able to do this is tiny: it can be done using the standard "smart 'phone" people carry around in their pockets, and using freely-available software tools.
There are 7.5 billion people in the world. If even 1% of 1% of them have the really, really bad intentions that governments believe these measures will stop, that gives us 750,000 people who have an interest in investing the tiny amount of money and effort needed to get around these weakened encryption schemes. This assumes, of course, that at least one of the existing strong encryption methods – of which there are very many! – remains inaccessible to government organisations. If all of them magically become accessible to these government organisations, then among those 750,000 really, really bad people interested in circumventing the weakened encryption methods there are likely enough people to develop yet another algorithm that they will be able to exploit. Let's assume that Australia, or the U.S., or some other government ultimately succeeds in getting all the internet service providers to stop providing encrypted messaging2 services. Who will be affected? Innocent people going about their private business. That's who.
And, perhaps, some low-grade, stupid baddies. Any baddie worthy of the description will avail of the obvious workarounds I outline above. However, people communicating with their banks, lawyers conversing with their clients, and charity organisations working to support people in oppressive countries will all have their communications capabilities compromised by these laws, because they will all want to be compliant. Those who don't care about obeying the law will continue to communicate with each other in illegal ways, and it won't matter to them that they have to break yet one more law to do so. A message to politicians: Congratulations on getting this far. Please use the logic of the technology: Technology doesn't know or care about the motives of the people using it. Saying that innocent people won't be affected by a legal ban on effective encryption is patently untrue. But it's worse than that: non-innocent people definitely won't be affected; they will easily route around such a ban. If you are truly interested in the democratic principle of freedom, you will permit innocent people going about innocent activities to speak privately with whomever they wish, and you will push for more sensible measures, which will have to include investing in police and security forces so that they can improve their capacity to use the available, effective investigative methods, and develop new methods that don't infringe on the rights of innocent people!

Footnotes:
1 Or, if I just leave the envelope unsealed.
2 I'm now sick of writing variations on "weakened encryption"; simply put: if it has been weakened, it's not encryption.
You just can't stop people from using encryption, so stop trying to
Éibhear, 2018-08-21 09:00:00 IST

Update: I've added another post, intended to be read after this one, outlining how not only will government efforts to reduce the usage of encryption not work for their intended purpose, but that the real effect will be felt by innocent people doing innocent things. There is yet another story in the news about how the US Government is trying to compel a voice messaging service to break the encryption on the service for the purposes of an investigation. There are many reasons why this is a bad idea, and many explanations as to how investigative bodies like the FBI could get around such problems. I'm just going to cover one issue here: it's pointless. There is a very long (and growing) list of strong encryption algorithms. There are 7.5 billion people on the planet. The total cost of a computer (including the software required) to develop a system that uses any one of those algorithms – or to develop a new one – can be as low as €100-€200 (or lower), and the cost to rent one from one of the cloud hosting providers can be as low as less than €1/day. Encryption is just mathematics, so a new algorithm can be invented using a pencil and paper anyway. There is no government in the world that can stop all those 7.5 billion people from availing of the option to spend that small amount of money to develop such critically useful software. Telling Facebook that it can't use strong encryption doesn't stop Google from using it. Telling Google and Facebook that they can't use it doesn't stop Microsoft. Telling Facebook, Google and Microsoft doesn't stop Amazon, etc. Telling all companies based in the US that they can't use strong encryption doesn't stop companies in Canada from doing so. And then there's France, and New Zealand, and Russia, and Zimbabwe and all the other countries in the world.
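"Encryption is just mathematics" is not an exaggeration. As a toy illustration – not a scheme anyone should rely on in practice – here is a complete one-time-pad cipher, a method that, used correctly with a truly random, never-reused key as long as the message, is information-theoretically unbreakable, in a few lines of Python:

```python
import secrets

def make_key(length):
    """A one-time-pad key: truly random bytes, as long as the message."""
    return secrets.token_bytes(length)

def xor(data, key):
    """XOR each message byte with the corresponding key byte.
    The same operation both encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = make_key(len(message))
ciphertext = xor(message, key)
assert xor(ciphertext, key) == message  # decryption recovers the message
```

The point is the scale of the thing being legislated against: the whole "system" above fits on a postcard, and the mathematics behind stronger, more practical algorithms is published openly all over the world.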
Even if you could achieve the ridiculous "ideal" of having all countries in the world pass legislation to ban strong encryption, do you really think that subversive civil liberties groups or organised criminal groups would just stop there? Over every communications channel can be built another. For example, it's possible to put an encrypted message into a sealed envelope. Some might think that sealing the envelope is enough security, but others may want the additional protection. It's possible to send encrypted messages over SMS. It's possible to send encrypted messages in e-mail. It's possible to send strongly-encrypted messages over a channel that pretends to be encrypted but isn't, because some authoritarian government has passed a law, and those messages will be as difficult to decrypt as the voice messages over Facebook Messenger currently are. Literally, it's pointless, and also a tremendous waste of time and money. So: stop. Spend the cash on effective investigative methods, and allow innocent people to go about their business without interference.

So you want to edit/correct/clarify/retract that tweet?
Éibhear, 2018-08-19 01:14:00 IST

Some years ago I wrote a post proposing that Twitter could (should!) implement a retract button. I still think the proposal is valid and compelling. I recently listened to the great Techdirt podcast episode where Mike Masnick (@mmasnick) talks to Cathy Gellis (@cathygellis) and Parker Higgins (@xor) about "Old Tweets & Your Permanent Record"1, and it prompted me to think a little deeper about my initial proposal. The podcast discussion was inspired by Parker's2 post on how Twitter should allow users to hide their tweets (rather than deleting them). The context is that some famous Twitter users have recently got into trouble because their feeds have been searched for inappropriate tweets, which have then been used to cause them hassle in their professional lives ("She's a racist, the New York Times shouldn't hire her!"
or "He promoted paedophilia, Disney should sack him!"). Without being able to cite specifics, I'm guessing that this context results in tweets being deleted. In some cases, I'm sure, whole accounts have been deleted. The result is messy: tweets removed from conversational or other contextual flows, links broken, conversational continuity lost, etc. Sometimes, it's more than just messy: as discussed in the podcast, people can delete whole bunches of tweets, and then regret it later because some were personally valuable, after which there's no going back3. I wonder if my proposal might help. Also, I've added some aspects to it that, perhaps, may make it more attractive. Correct, clarify, retract. Twitter4 should offer users the ability to correct, clarify or retract a tweet. Unfortunately, I'm not very good at mocking up screens, so I ask you to imagine what I describe. If you would like to volunteer some mock-ups, I'm at @firstname.lastname@example.org or @email@example.com and I would be very grateful for your help. In Twitter, an author's tweet comes with a menu (click on the shallow "v" at the top-right corner of the tweet) that allows you to perform a number of actions, including deleting the tweet. I suggest that there could be some other options, perhaps under a sub-menu named Edit, offering Correct, Clarify and Retract as options. So, what will happen with these options? Upon selecting one of these, the author of the original tweet will be presented with the ability to reply to it, and will (if they wish, I guess) enter an explanation of what's happening. This could be a thread, if necessary. Once submitted, the interesting things happen: The original tweet will be updated in a prominent way with one of the following:
"The author has added a correction to this tweet."
"The author has provided a clarification to this tweet."
"The author has retracted this tweet."
The explanation provided will forever be the first reply to the original tweet: users won't be able to say they couldn't find it. [Maybe: I haven't thought this one out fully] It won't be possible to reply to or retweet the original tweet. Where the original tweet was retweeted or embedded elsewhere, the fact that the tweet has been corrected, or clarified, or retracted will be made clear to the user. As mentioned in my previous post, all other users who engaged with the original tweet beforehand will receive a notification that this correction or clarification or retraction has happened. Some examples:
Correction: Imagine some lumpen idiot who makes reference to @mmasnick in a tweet and then types out his name as "Mick", but doesn't notice that until after it has been pointed out to him5. It would be nice to be able to add a correction to the tweet to say that it should have been "Mike" and for it to be prominent for anyone who comes along.
Clarification: Remember that tweet you sent that made sense to you right up to the moment just after you submitted it, and then you started to doubt yourself? And then your doubts are confirmed by all the replies with "Wut?" and "Well, actually…"? To delete it, and then to send a clarification, presents the problem that all those replies are set afloat in a sea of non-context, and then it's hard for people to know what's happening. With my proposal, you could follow the original post with a clarification, which jumps out at you when you visit the original. Now other users understand the context of the full conversation.
Retraction: My original point. Sometimes you say something that you really regret. Maybe you said something long in the past that now runs counter to your current opinion. The standard approach is to delete the tweet.
I have never been convinced that this is the correct approach, however, especially for people who want to acknowledge that they were wrong with the original post. Having been in that situation myself, I understand the risk that a tweet that lives on could be taken out of context no matter how clear you are in your retraction. These tweets are the best fodder for those who want to destroy your employment or personal relationships, but to remove them completely is a regrettable corruption of the historical record, and essentially denies it happened. But attempting such a denial may very well backfire. Why, though? I believe that deleting tweets is wrong. I won't try to convert anyone else to this view, but I would like to help those who agree with me regarding what I say above to use alternative approaches – at least on a case-by-case basis. Essentially, to delete a tweet – especially one that others have engaged with – is appropriate in only a very small number of scenarios. People delete tweets to correct typos (Correction: this is the only reason I delete tweets from my personal account), to rephrase them (Clarification), to ensure that they are not used against them (Retraction) or because they are embarrassed by them (Deletion: in no way am I arguing that users shouldn't be permitted to delete tweets). For me only the last reason is compelling enough to delete a tweet, and even then I would recommend retraction instead. Have a read of The Intercept's A Note to Readers, in which it outlines how it found out that one of its staff members had been fabricating parts of stories he had written and which were published on the site. Instead of removing the stories, The Intercept added "corrections and editor's notes" to them, even going so far as to retract one story in full. This, in my view, is the correct approach: the record remains intact but the follow-up that corrects or clarifies or retracts the original is prominently presented to all parties.
In my scheme, Twitter will include the notice of correction, clarification or retraction with embeds and retweets (even historical retweets – a tweet from 5 years ago that was retracted last week should be obviously retracted to someone reviewing the retweeter's timeline now). Therefore, each reviewing user will be offered the chance to dig a tiny bit deeper into the context before deciding to rant, report, or call for a sacking. Yes, of course: screen grabs will always allow bad-faith actors to remove or deny context. However, if this proposal is implemented, then it will (should!) be clear that if there isn't a link to the original post accompanying the screen grab, then the person presenting the screen grab may not have good-faith intentions. Finally, I think it's important that the users who have replied to the original, or retweeted it, receive a notification that the correction or clarification or retraction has happened. This offers those users the opportunity to pass information regarding that event on to their own followers, especially if they have tweeted something about the original that they feel needs a follow-up.
The elusive edit-tweet feature
Imagine the scenario: a seemingly nice person posts a tweet with "I love cats". As expected it gets loads of "likes"6. Then, some time later, that seemingly-nice person uses Twitter's Edit Tweet feature to change it from "I love cats" to "I think fascism is great". Those loads of likes remain, and anyone viewing the list of people who liked the original post will see what they consider to be a list of fascists. This is why Twitter doesn't offer an Edit Tweet feature. However, many want to be able to make changes to tweets they have posted, and the only option available to them is to delete and re-post. My proposal offers a compelling alternative to pure editing of tweets, I believe.

Footnotes:
1 Ampersand in the original.
2 There must be something up with me, and I don't think it's the new beer I'm drinking as I write this: I typed "Mr. Higgins'" and "Higgins'" and then decided that "Parker's" was probably the best approach, as the others were too formal. I hope I'm right, and I wonder why I worry about these things.
3 Parker does mention in his post that users can archive their tweets, and may be able to repost them somewhere else, but this could only ever be a part-solution, as conversations would still be broken.
4 Yes. Yes. And the others, like GNU Social, Pleroma, Mastodon, etc. I will get to that in a subsequent post.
5 His reply to my tweet about his podcast came in as I was typing this post.
6 I still prefer "favourite", even though I don't even like that description.

With marginal knowledge comes marginal power
Éibhear, 2018-05-28 07:55:00 IST

As a designer of back-end IT systems, I regard error management and error reporting as something to consider at the start, rather than at the end. Some years ago1, I designed a file-handling system, where we identified a little over 100 different error scenarios to manage. The system acted as a file-movement interface between our internal system and an externally-hosted service. Between taking files off the end of one pipe and placing them onto another pipe, there was a need to perform rigorous integrity-checks on the files, and then specific transformations on the contents. Quite a number of potential points of failure arose, and if any one was to occur, we wanted to make sure we knew which one, so that the response from the support team would be appropriate. Management of error conditions was fairly easy. Depending on the error, we needed to know whether the user was to be informed (the user being the business admins of the service within the organisation, and the answer being "sometimes"), whether the support staff were to be informed ("always") and where to place the file (and its associated artefacts).
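That decision logic – look the error up, then decide who to tell and what to do with the file – can be sketched as follows. All of the names, codes and table entries here are hypothetical, invented in the spirit of the examples later in this post, not the real system's values:

```python
# A hypothetical sketch of look-up-driven error handling: each error
# code maps to a record saying what to do with the file, the
# human-readable message, and who should be notified.
from collections import namedtuple

ErrorSpec = namedtuple(
    "ErrorSpec", ["action", "message", "notify_user", "notify_support"])

# In the real system this table lived in a look-up file; these
# entries are invented examples.
ERROR_TABLE = {
    "IN_FILE_DECOMPRESSING": ErrorSpec(
        "abort_file", "Error decompressing the inbound file", True, True),
    "IN_FILE_INVALID_SIG": ErrorSpec(
        "abort_file", "Invalid cryptographic signature on inbound file", True, True),
    "OUT_FILE_LINE_COUNT": ErrorSpec(
        "continue", "Could not determine outbound file line count", False, True),
}

def handle_error(code):
    """Return the action to take and the notifications to send.
    Notifications carry the English message, never the internal code."""
    spec = ERROR_TABLE[code]
    notices = []
    if spec.notify_user:
        notices.append(("user", spec.message))
    if spec.notify_support:
        notices.append(("support", spec.message))
    return spec.action, notices

action, notices = handle_error("OUT_FILE_LINE_COUNT")
```

The appeal of this structure is that adding a new error scenario is a data change, not a code change: a new row in the look-up file, and the dispatch logic never needs to be touched.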
Being clear to all concerned on what errors we were dealing with was the interesting challenge. As is normal, we devised a look-up file which would contain the following information on each potential error:
A code uniquely identifying the error;
A directive on what to do with the file when the error presented (record the error and continue processing the file, abort processing that file, abort processing of all files, etc.);
A message describing the error, in English;
A flag for whether to inform the user of the error; and
A flag for whether to inform the IT Support department ("Y" in all the identified scenarios at this time, but potentially "N" for new, future errors to be managed).
The code we developed would identify the error by its error code, and the look-up file would be used to determine the response. In the notification e-mail, the error is expressed using the human-language message, and not the internal error code. However, the log files recording the processing activity – for the sake of brevity – would record the error code, and not the English message. For what follows in this post, it's important to re-state the following: the notifications sent to the user and the IT Support department would not contain the error code, only the message. If, in designing the system, we had done our jobs properly, the e-mail notification should be sufficient to inform the reader of the specific error condition, and in the vast majority of cases the appropriate response would naturally follow without further investigation. The population of log files was deemed prudent, though, in case something completely unexpected arose; but they should not need to be consulted except when all other options have been exhausted. As a help to those reviewing the log file, I determined that the error code itself should attempt to be readable.
Therefore we devised a format that had two benefits: it would allow those reading the code to guess quickly what the nature of the error was, and also allow for the easy addition of new codes should new error scenarios arise. The format is a little like:
[IN|OUT]_<OBJ>_<ERROR>
The first part says whether we're dealing with a file coming in to our internal system, or heading out from it; the second part says what the file type is (i.e. a payload file, or one of the accompanying integrity-affirming files); and the third says which error has been triggered. Thus: IN_FILE_DECOMPRESSING tells us that there was an error decompressing the inbound file; IN_FILE_INVALID_SIG tells us that the cryptographic signature for the inbound file is invalid; OUT_FILE_LINE_COUNT declares that we could not determine the line-count of the outbound file we're processing; and so on. Fast-forward to a few weeks before we go live, and we present our system-nearing-test-completion to some of the IT support staff. This is so that they are familiar with how the system was intended to work. Over a number of sessions, we presented the major features of the system, broken into sessions on the business-requirements implementation and the non-functional implementations. One of the latter sessions was on error handling. A comment during the session had me quite puzzled. One of the attendees decried the format of the error codes, claiming that there were too many elements to it. The expressed desire was for "simple" alpha-numeric codes that they could learn off. I hemmmmmed. I hawwwwed. I agreed that the comment was an interesting one. I also suggested that at this late stage in the project, going back to devise and implement such a scheme would be costly, and would introduce unacceptable risk to the project and its post-implementation support. But I would keep it in mind. And I did keep it in mind. I worked hard on figuring this one out, for a long time.
Eventually, I think I arrived at the core point. Consider this: Oracle DBAs know what an ORA-600 error2 or an ORA-12154 error3 is. Someone, somewhere (surely?!), knows what to do when the "Excel found unreadable content in…" error that MS Excel often throws up is presented. However, the users don't, and that's because they don't have access to the documentation that tells them what's going on. If our error codes were, in themselves, explicative as regards what the problem is or was, then the user (or someone new to the support team!) might be overly empowered to resolve the matter him- or herself. Yes. So, we need the error subsystem to use an obscure coding, so that those who respond often must learn the codes and their meanings as part of their jobs, while those who encounter them rarely must ask others for help. Also, wouldn't it be cool for two people to speak with each other using these arcane codes in front of a user not-so-familiar with them? I have a scheme that could be fool-proof: the ADICEC, pronounced "Adi-ssek". The "Arbitrarily-Devised, Intentionally Complex, Error Code" can be an attribute of the error, prepared purely for the IT department to maintain a separation from the user, building a false dependency between them. Here's how to build an ADICEC: It must be long. It must contain elements that are utterly pointless. For example:
A client code (because you never know when the department head is going to attempt to "monetise" the system by selling it to others)4;
An instance code (because you never know how many instances of this single-requirement-system there will be);
A date-time stamp (because we always want to know when these codes have been devised).
Alternatively, you could use the change-tracker ticket id with which the error code was introduced…
A product code (because … never mind);
(Finally) a randomly-generated – but seemingly-sequential – alphanumeric value, uniquely identifying the error, but bearing no reference at all to what the error is about.
Now, don't be put off by this. We will continue to use a more accessible error code for exception handling in our source code. The ADICEC will be external to the source, and will only be used for the purpose of inflating a department's sense of importance. So much for K.I.S.S.

Footnotes:
1 Many years ago! I now feel it's time to publish this story!
2 Internal error.
3 TNS alias look-up error.
4 This was a system for a financial services organisation to meet its specific requirements, and was never going to be sold as a product because of the uniqueness of those requirements.

Voting with your conscience
Éibhear, 2018-05-21 08:38:00 IST

The following is a reproduction of a tweet-storm I put up in October of 2017. I have edited it slightly to correct 1 typo, and to improve the flow. The phrase "voting with your conscience" is bullshit at the most basic level. It's intended to highlight how brave someone is by voting against a popular position due to some deep-seated moral concern with the consequences. It is particularly "heroic" when it's obvious that the vote will pass. Thing is, those who use the term do so to justify voting according to an instruction from some self-appointed authority who has no legal basis. "Voting with your conscience" sounds like a decision made after some deep, critical thinking, when it's almost always the direct opposite: it's voting with someone else's conscience. "Voting with conscience" justifies discrimination against women, the poor, the disabled, children, foreigners – particularly people of colour – and other minorities.
"Voting with conscience" seeks to interfere with the private lives and personal relationships of people who seek little more than to live and love like everyone else. Unless your conscience tells you that we need to improve society to help those worse off than us; unless your conscience wants you to break the privileged authoritarianism of rich, white men; unless your conscience recognises that those who claim moral authority are sitting on a house of cards of their own making, you're more than likely voting without conscience. We're all supposed to vote with our conscience, though. That's what democracy's about. Isn't it?

Proposal to link social media accounts to government-issued IDs
Éibhear, 2018-04-18 20:34:00 IST

I drafted and sent the following to the editor of the Irish Times on the 4th April, 2018. By today, the 18th, it hadn't been published, so I'm guessing it won't be. Here it is.

If you're unfamiliar with it, the Public Services Card is an initiative of the Irish Government, and it has been credibly accused of being a national ID card introduced by stealth. The proposal I discuss below is another piece of evidence supporting that position.

A Chara,

Minister Jim Daly wants to tie our online identities to the Public Services Card. To progress this plan, he has written to the EU Commission asking for this to be made EU policy, and has met with Facebook to get support for the measure. Leaving aside the clear violation of Article 8 of the EU Charter of Fundamental Rights, the proposal opens up a question that I rarely see discussed by those proposing such measures.

Facebook, Twitter, Snapchat, etc. are not the only online social networking platforms. They're just the biggest. At the moment. There are, perhaps, thousands of such services around the world, each as easy to access as the other. For example, I run two similar services from a server in my home. Technically, they are not hard to set up, and as more and more come online, it can only become easier.
Does Minister Daly's plan cover all social networking platforms, and has he a plan to meet with the operators of each? How does Minister Daly propose to require each service, no matter the country it is located in, to connect to the Public Services Card servers to validate a user's identity? Will those servers be able to manage such a load? Under such a regime, what controls will be in place to prevent bad actors from getting identity information on millions of Irish people and making it available for sale to the highest bidders? In the certain event that the vast majority of such social networking services will ignore Minister Daly's requirements, simply because they fall outside the country's jurisdiction, what measures has he prepared to deal with the large number of people from Ireland who will use those services rather than the services that agree to perform surveillance for him?

For as long as the internet has been in existence, there have been popular platforms that are the best at providing services, but they have never been the only such platforms. Over the same period we have seen many of these popular services supplanted by newcomers. Remember USENET and bulletin boards from the 1980s? Remember when Yahoo! and Ask Jeeves were the best search engines? By forcing the current popular services to ignore Irish internet users' privacy, Mr. Daly will drive those users to other services out of his reach, ultimately destroying his plan. As we've seen recently, even in the absence of such state-mandated spying, protecting users' privacy is something that the richest, and most technically savvy, commercial organisations struggle to get right.

Mr. Daly, as with all other politicians seeking to control online behaviour, would do well to become familiar with the landscape and to conduct proper research before making such proposals.

Is mise,
Éibhear Ó hAnluain, etc.
Mention in today's Irishman's Diary
Éibhear, 2017-12-13 22:13:00 GMT

It's probably too late for this posting to be of any value, but if you're here upon Frank McNally's suggestion in his Irishman's Diary column today, then the explanation he mentions is here. I have to say that I'm mad chuffed.

Licence to copy
Éibhear, 2017-10-20 22:24:00 IST

Remember Arlo Guthrie's Alice's Restaurant Massacree? No? It's a near-19-minute song in which the protagonist tells the story of how he was arrested for littering. He and a friend brought a pile of rubbish to a dump, but it was closed for Thanksgiving, so they tossed it over a cliff. They were caught, convicted, fined and ordered to pick up the rubbish. Some time later, when he was ordered to present himself for the draft (to the Vietnam War – remember that?), he did so, but attempted to get out of it, first by pretending to be unhealthy (he turned up hung-over), and then by pretending to be psychologically ill, which resulted only in him being praised, so he had to go through with the assessment. During an interview, he admitted to having a conviction, and so was required to fill out the form seeking the details. It turned out that the conviction disqualified him from joining the army, to "burn women, kids, houses and villages – after bein' a litterbug."

The best part of the song for me has always been…

… and everything was fine and I put down the pencil, and I turned over the piece of paper, and there, there on the other side, in the middle of the other side, away from everything else on the other side, in parentheses, capital letters, quotated, read the following words: ("KID, HAVE YOU REHABILITATED YOURSELF?")

If you don't know it, or don't remember it, I think you really should give it a go. And, as Arlo himself says in the song, "but that's not what I came to tell you about."

Every year in January we renew our television licence.
Every year, I turn the page, and every year, there, there on that other side, in the middle of the other side, away from everything else on the other side, is the following:

Kid! This licence does not authorise any infringement of copyright in the matter received.

I laugh every time, and remember Alice's Restaurant Massacree. And then I laugh a little more at the thoughts that (a) someone thought it was necessary – or even a good idea – to put that message on the document, (b) someone thought it best to make it so sheepishly prominent, and (c) someone thinks there is a concept of authorised infringement of copyright.

File names, and what you need to know
Éibhear, 2017-10-14 20:36:00 IST

TL;DR: If you work in any capacity with Microsoft Windows (especially if you're working in IT), you should always set Windows not to hide the extensions of filenames.

Microsoft and Microsoft Windows have taught us some core facts about files over the years:

1. All files have extensions to their names: *.jpg, *.gif, *.xls, *.docx, etc.
2. The extension of a filename tells you what the file is all about:
   - *.jpg is an image file
   - *.gif is a short, funny movie1
   - *.xls is an old-format Microsoft Excel file
   - *.docx is a new-ish-format Microsoft Word file
3. Because of facts 1. and 2. above, we don't need to see the filename extension when looking at lists of files, as we can trust Microsoft Windows' judgement as to what they are.

All of these core facts are wrong.

In the old DOS world, the names of files were of the format <name>.<ext>, where <name> was limited to 8 alphanumeric characters and <ext> was limited to 3, and could not be omitted. This was just the DOS world, though, and the format and the extension requirements were relaxed in Windows 95 (more than 20 years ago). As it happens, UNIX never forced file name formats like this. Nor did Apple's Macintosh operating systems. While extensions can be useful, it's not true to say that files require them, nor, even, that all files have them.
The convention in the Linux world, for example, is for executable programs not to have filename extensions. If a filename has an extension, the only inference you can draw from that is that the filename has an extension.

Consider the following: a file was created in 2005 with Microsoft Word and was saved in the native format of that version, often identified as "Word 97-2003 Document (*.doc)". Let's say its name is AnnualAccountsReport2005.doc. In 2015, you "upgrade" the file to the latest Microsoft Word format by renaming it, changing the file's extension from .doc to .docx, resulting in it having the new name AnnualAccountsReport2005.docx. If we go by the rule that the extension tells us what's in the file, then this makes sense: we've changed the filename's extension, and because the extension tells us what's in the file, it must now be in the more modern Microsoft Word file format. Do it. What you'll see is that Microsoft Word will report an error when opening the renamed file, because the extension (the new one) doesn't match the format (the old one, which the file still uses, as all you did was change the file's name).

Now, because the first core fact is not true and the second core fact is not true, the third core fact isn't true either: hiding the extension of a filename is – it turns out – not a wise thing to do.

Remember the love bug? I do. Also known as the "ILOVEYOU" computer virus, one of the reasons for its success was precisely those core facts that I have just shown to be false. Most recipients of the e-mailed virus would have seen an attached file with the name LOVE-LETTER-FOR-YOU.txt, but its full name was LOVE-LETTER-FOR-YOU.txt.vbs. The .vbs was hidden because that's what Microsoft Windows does by default. By double-clicking on it, users thought they were opening a text file in something like Notepad or WordPad, but instead the file was run as a script, which then caused damage to the user's computer.
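You don't need Word to see that the rename changed nothing: the file's leading "magic" bytes give its real format away. Here's a minimal sketch, assuming the well-known signatures for ZIP containers and OLE2 compound documents (the helper name sniff_format and its labels are my own, not any standard API):

```python
# Sketch: identify a Word file's real format from its leading bytes,
# ignoring the filename extension entirely.

def sniff_format(path):
    """Return a rough description of the file's actual on-disk format."""
    with open(path, "rb") as f:
        head = f.read(8)
    if head.startswith(b"PK\x03\x04"):
        # Modern Office files (.docx, .xlsx, ...) are ZIP containers.
        return "ZIP container (modern .docx/.xlsx family)"
    if head.startswith(b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"):
        # Legacy Office files (.doc, .xls, ...) use the OLE2 format.
        return "OLE2 compound document (legacy .doc/.xls family)"
    return "unknown"

# A legacy .doc renamed to .docx still begins with the OLE2 signature,
# so the mismatch between the name and the content is immediately visible:
# sniff_format("AnnualAccountsReport2005.docx")
```

A renamed file keeps its original signature, which is exactly why Word complains: the name now promises a ZIP container, but the bytes are still an OLE2 document.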
Unfortunately, modern Linux desktop environments seem to work – unnecessarily – on this same model, in which the filename extension informs the system as to the file's contents, which in turn informs the system which tool to use to open it. Over the years I have encountered many software development issues that were solved by prompting the developer to look inside the file rather than rely on the filename's extension. In fact, it's my firm opinion that all IT professionals should change their windowing environment's settings to force the showing of filename extensions.

Footnotes:
1 No, it's not, but I've no idea how this misunderstanding came about!
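To make the ILOVEYOU naming trick concrete, here's a small sketch of what extension-hiding does to a double extension, and of a simple check that flags the deception. The function names and the list of "executable" extensions are my own illustrations, not exhaustive or standard:

```python
# Illustrative only: a handful of extensions Windows treats as runnable.
EXECUTABLE_EXTENSIONS = {".vbs", ".exe", ".scr", ".bat", ".cmd", ".js"}

def displayed_name(filename):
    """What a 'hide extensions' setting shows: the final extension is
    stripped, leaving any earlier, fake extension in place."""
    stem, dot, _ext = filename.rpartition(".")
    return stem if dot else filename

def looks_deceptive(filename):
    """True if the real (final) extension is executable while the name
    the user sees still appears to end in some other extension."""
    stem, dot, ext = filename.lower().rpartition(".")
    return dot == "." and ("." + ext) in EXECUTABLE_EXTENSIONS and "." in stem

print(displayed_name("LOVE-LETTER-FOR-YOU.txt.vbs"))   # LOVE-LETTER-FOR-YOU.txt
print(looks_deceptive("LOVE-LETTER-FOR-YOU.txt.vbs"))  # True
print(looks_deceptive("notes.txt"))                    # False
```

With extensions hidden, the victim sees only LOVE-LETTER-FOR-YOU.txt; with them shown, the trailing .vbs (and the warning sign of a double extension) is plainly visible.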