
Wednesday, October 19, 2011

The three levels of authentication

Aside from the chance to get rich and famous, I wrote Designing Mobile Interfaces basically because I think it's a good thing to share knowledge. Much of the writing on this blog is the same way. But I often forget when something is useful. Yesterday evening, someone copied me on a long email: a thing I wrote a couple of months ago about some of the key IA principles behind authentication, authorization, and so on. The deal is, I wrote it in about half an hour, off the top of my head, because it's the tenth time I've had to write such a thing; so many people do it wrong. So, this is a perfect example of something to sanitize and share. Instead of over-editing it to be chatty, I've left it pretty requirements-like. Ask if something is confusing or too briefly defined, or tell me if you just straight-up disagree.

These assume a website. Application authentication can be similar, but is not addressed here. There are three levels of authentication, not two:
  1. Anonymous
  2. Identified (or Recognized)
  3. Authenticated
And it is important that you never call the middle one "cookied," "zipped-in," "cart remembered," or anything else that is either very specific or based on the technology solution. The middle tier is as important as the other two. Each level of authentication has multiple characteristics:
  • Presentation Authorization – A certain set of information (general, specific, or personalized) is displayed on the screen or can be printed. There is no direct method for the user to change this level (such as hidden content revealed by a link).
  • Transactional Authorization – A certain set of transactions (general, access-controlling, or liability-inducing) may be performed.
  • Management Authorization – Most users can only manage their own profile. Some can manage multiple profiles. Some (e.g. customer care) can perform certain functions across multiple profiles. These are further subdivided into degrees of Presentation or Transactional Authorization for each of the other profiles they can view and/or manage. Even if not in scope at the moment, you must consider this during the design of the system. (A rough sketch of the whole model follows this list.)
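
Purely as an illustration of the model (none of these names or values come from the original requirements), the levels and their authorization characteristics might be sketched like this in Python:

```python
from enum import Enum, auto

class AuthLevel(Enum):
    """The three levels, in increasing order of trust."""
    ANONYMOUS = auto()
    IDENTIFIED = auto()
    AUTHENTICATED = auto()

# Illustrative summary of each level's authorization characteristics.
# A real system would attach finer-grained rules per feature and per profile.
AUTHORIZATION = {
    AuthLevel.ANONYMOUS: {
        "presentation": "general content only",
        "transactional": "general transactions only",
    },
    AuthLevel.IDENTIFIED: {
        "presentation": "personalized content, sensitive values masked",
        "transactional": "simple, recoverable transactions only",
    },
    AuthLevel.AUTHENTICATED: {
        "presentation": "complete profile",
        "transactional": "everything, including access-controlling and liability-inducing",
    },
}
```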

Anonymous:

A customer visiting your site who has never visited before, or is visiting on a computer without a cookie (such as a friend's, or at the library), will not be identified, and receives only general content. There are other schemes aside from cookies, but cookies are the most common, so assume that for planning purposes. Links to various types of personal information or authenticated-customer services are still displayed, and if selected will, depending on the type of information or the capabilities of the service, either:
  • Allow use, but not save information. The kitchen planner today works like this, and intercepts for authentication on save or exit.
  • Inform the user that the feature is only for customers, providing a significant amount of introductory content and a link to authenticate or register.
  • Intercept with an authentication dialogue, demanding credentials (or enticing the user to register) or allowing the request to be abandoned.

Identified:

The customer has visited the site before while authenticated, and the client computer can accept cookies. On this visit, Lowe's.com recognizes the cookie, matches it to a user profile, and presents complete customer information (with the usual masking of values that should not be sent over the internet, such as passwords and credit card numbers). The customer can perform simple transactions that do not induce access-control changes or liability, and are not too destructive (i.e., are recoverable). They may not change access-controlling or liability-inducing information. They may have limits on changes to personal information: for example, items can be added to or removed from the Home Profile, but deleting the whole Home may not be allowed. The following is a sample set of transactional features that are not available to the Identified session:
  • Purchases
  • Change password, or recovery information
  • Change or add mailing address
  • Change or add contact information
  • Change or add payment methods
Restricting transactional access to payment information and authentication credentials may be most seamless in the UX if view access is also withheld, but this is not inherent in the rules.
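
Continuing that illustrative sketch, the gating for an Identified session might look like the following; the transaction names are invented to mirror the list above:

```python
# Transactions unavailable to an Identified session (mirroring the list above).
RESTRICTED_FOR_IDENTIFIED = {
    "purchase",
    "change_password",
    "change_recovery_info",
    "change_mailing_address",
    "change_contact_info",
    "change_payment_method",
}

def can_transact(level: AuthLevel, transaction: str) -> bool:
    """Authenticated sessions may do anything; Identified sessions are
    intercepted (for step-up authentication) on restricted transactions."""
    if level is AuthLevel.AUTHENTICATED:
        return True
    if level is AuthLevel.IDENTIFIED:
        return transaction not in RESTRICTED_FOR_IDENTIFIED
    return False  # Anonymous: intercept with the authentication dialogue
```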

Authenticated:

The customer has provided a username and password combination which is successfully recognized by the system. There are two ways to arrive at this state:
  • The customer enters the site as an anonymous user, and either deliberately signs on, or attempts to access an unavailable transaction (such as checking out with purchases). They enter both a username and passcode.
  • The customer has entered the site and been automatically identified. When attempting to access an unavailable transaction (such as checking out) they are intercepted, and provide their passcode only. The username is pre-populated (or not displayed, with a proxy such as the customer's nickname and avatar shown instead) during the request for additional credentials.
Authentication may also have happened due to an earlier transaction, with the session still active; authentication does not have to be directly related to the service requiring this level of access control. The customer then has complete authorization to view and transact with all portions of their profile as long as the session does not expire.
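
A minimal sketch of the interception, reusing the illustrative AuthLevel enum from earlier; the session fields and the verify_passcode hook are assumptions, not anything from a real system:

```python
def step_up_authenticate(session, passcode: str, verify_passcode) -> bool:
    """Promote an Identified session to Authenticated.

    The session already carries the profile matched from the cookie, so
    only the second credential is requested; the username is pre-populated
    (or replaced by the nickname/avatar proxy) in the dialogue.
    """
    if verify_passcode(session.profile, passcode):
        session.level = AuthLevel.AUTHENTICATED
        return True
    return False
```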

Access-controlling:

Any feature that, if maliciously modified within one session, could prevent access by the owner, and/or redirect information about access away from the owner. Basically, changing credentials or contact information.

Liability-inducing:

Any feature that, if maliciously modified, could result in unauthorized financial transactions, or that includes information which could lead to valid transactions being intercepted or otherwise repurposed.

Cross-Channel:

The same identifiers should be used in all channels: all websites, all applications, in store, at home. Rules may vary between the access points. For example, the mobile application (and website) should probably sign on with the same credentials, then keep the customer entirely authenticated, relying on the device's own security.

Identification Management:

A cookie or similar key is placed on the customer's client machine. This is checked at each authenticated visit, and if no cookie is found a new one is created; no special first-use case must be developed. Since this key lives on the client and is sent in plaintext, it must be safe to expose. Do not place any intelligence in the key. It should not be the customer's name, location, customer ID, keyfob ID, MEID, time created, store, card number, SSN, or any other characteristic. It should not even be a random or encoded value using any customer-identifying information as the seed; all too often, these are cracked, with bad PR if you are lucky. It should be an entirely arbitrary key value, and nothing else. Make this key value as short as possible, to reduce bandwidth, and remember not to pad values to the expected total size of the data store: the first users to register can have a key of the minimum secure, uninterpretable length, with additional characters added as the service grows.

The key is sent to an authentication management service when the customer enters the site, and is looked up in a table to associate the key with a customer. The session then carries that customer's authorization level.
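
As a sketch of the "entirely arbitrary key" advice: Python's standard secrets module already draws from the OS CSPRNG, so the only design decision left is length (the 16 bytes here are illustrative, not a recommendation):

```python
import secrets

def new_identification_key() -> str:
    """An entirely arbitrary key: no name, customer ID, MEID, timestamp,
    or any customer-derived seed. There is no intelligence in the value
    to crack out of it."""
    return secrets.token_urlsafe(16)  # ~22 URL-safe characters

# Server-side lookup only: the client holds nothing but the meaningless key.
key_to_customer = {}

def identify(cookie_value):
    """Map a presented cookie value to a customer ID, or None (Anonymous)."""
    return key_to_customer.get(cookie_value)
```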

User Management of Identification:

It is usually best to simply set the cookie for all users. Those who do not like them will disable cookies across the board, or for your domain. If there are concerns, then a method can be provided for the user to manage this:
  • A checkbox on the authentication dialogue (set to save the cookie by default).
  • A setting in the user's profile.
  • Help documentation, with as much information as possible and links to the privacy and security policies.
Note that explicit signout is another way for customers to "manage" their cookies, by clearing them at each session. However, this is a manual process, repeated one session at a time.

Timeout:

Identified sessions do not time out in a manner visible to the customer; they may continue using the site in the Identified state indefinitely. When an Authenticated customer session times out, it seamlessly falls back to the Identified state. There is generally no need for notices, warnings, or the presentation of the authentication dialogue; when the user next needs an Authentication-required feature, they transact as usual from the Identified state. Due to the seamless switching, this can increase security and help with server load balancing: sessions can be much shorter, as short as 5 to 10 minutes, without serious problems. Some cases may need special exemptions. Checkout, for example, may need to be extended or exempted if the process routinely takes customers longer than the otherwise-recommended timeout period.
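
A sketch of the seamless fallback, again reusing the illustrative AuthLevel enum; the session fields and the 10-minute TTL are stand-ins for real decisions:

```python
import time

AUTH_TTL_SECONDS = 600  # the 5-10 minute window from the text; tune per service

def effective_level(session):
    """Authenticated sessions silently fall back to Identified when the
    short authentication window expires; no warning, no sign-on dialogue.
    The Identified state itself never visibly times out."""
    if (session.level is AuthLevel.AUTHENTICATED
            and time.monotonic() - session.authenticated_at > AUTH_TTL_SECONDS):
        session.level = AuthLevel.IDENTIFIED
    return session.level
```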

Explicit Signout:

When the customer explicitly signs out, they return to the Anonymous state. This state should be persistent, and the cookie cleared.

Failed Authentication:

If an already-identified customer fails to correctly enter their second credential (the passcode) enough times, they may either be locked out entirely, or revert to the Identified state and only be locked out of re-authenticating. This decision will have to be made by the team and by Security. Any lockout should send a message out of channel to alert the customer of a possible attempt to break into their profile. Usually this will just be an email, but the system should be built so it can send to SMS or other channels as those notification capabilities are built.
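
One way the revert-to-Identified choice might look in the sketch's terms; the threshold and field names are placeholders for decisions the team and Security would make:

```python
MAX_PASSCODE_FAILURES = 5  # illustrative; the real threshold is a team/Security call

def handle_failed_passcode(session, notify_out_of_channel):
    """Option sketched here: revert to Identified and lock out only
    re-authentication (full lockout is the other valid choice). Either
    way, alert the customer on a channel the attacker likely doesn't hold."""
    session.failed_attempts += 1
    if session.failed_attempts >= MAX_PASSCODE_FAILURES:
        session.level = AuthLevel.IDENTIFIED
        session.reauth_locked = True
        # Email today; SMS or other channels as those capabilities are built.
        notify_out_of_channel(session.profile, "Repeated failed sign-on attempts")
```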

Wednesday, August 17, 2011

Admissibility & Mobile Networks: Security is a Lot More Than Passwords

I am writing this because of the recent attention to mobile network "security" in the wake of several phone "hacking" scandals. It probably should have been one of those topics in the appendix of the Designing Mobile Interfaces book, but it's been delivered, and O'Reilly is actually asking us to cut the appendix down so the thing is only good for holding doors open, instead of anchoring boats. For now, a blog post will do.

Before we get into the details, some basics of security. First of all, security is not about preventing access, because straight-up, 100% prevention of pretty much anything is impossible. Any analogy will do, so take fire safety. You cannot build a fire-proof house. But you follow building codes, put in proper-sized breakers, replace worn power cords, don't keep piles of gasoline-soaked rags in the corner, put the family photos in a firesafe, and make sure everyone is aware of where the exits are. You install smoke detectors. If you don't live in a reasonably dense area, you may want a remote alarm system. Together, these provide:
  • Reduction of risk - Do what is reasonable to prevent things catching on fire.
  • Reduction of exposure - If a fire starts, it should not spread too quickly; key systems are hardened or have backups and people can escape.
  • Reduced reaction time to incidents - You or your neighbors notice the screeching and smoke, or the fire department gets notified automatically, and arrives to stop the fire before it is too severe.
This is not just an analogy; fire safety and things like it are formally considered this way. The burn-through time for a wall or fire door is considered in the context of the structure, its risk of fire, the type of fire, the response of the fire department, and so on. Similar thinking applies to physical security. Bank vaults are shiny to impress customers but really are only so strong; they have alarms and guards, are often designed to be seen from the street, and drilling or cutting will be heard. All you have to do is make sure the vault is hard enough to break into that someone will notice, and can react quickly enough to stop the bad guys. The same goes for technical security, for the same reasons: cost, and ease of day-to-day use. Reasonable locks, storing things in ways that cannot be exploited once the front door is broken (don't store passwords in plaintext), and baking in ways to be aware of breaches so you can fix them. A common one that I like is the out-of-channel notification. You make a liability-inducing (money changes hands) transaction, or an access-control change (say, changing your password) via the website, and an email is sent out. The website is more likely to have been breached from several thousand miles away, and the attacker probably doesn't have access to your email account, so you have a chance to notice and react before much damage is done.
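
As a sketch of that out-of-channel pattern (the transaction kinds echo the definitions in the post above; everything else here is invented):

```python
SENSITIVE_KINDS = {"liability-inducing", "access-controlling"}

def commit_with_notification(profile, transaction, send_email):
    """The out-of-channel pattern: a sensitive change made on the website
    also triggers a message on a separate channel, giving the customer a
    chance to notice a breach and react before much damage is done."""
    transaction.commit()
    if transaction.kind in SENSITIVE_KINDS:
        send_email(profile.contact_email,
                   f"A {transaction.kind} change was just made to your account.")
```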
Moving closer to the topic: as I talked about several years ago, there's a standard four-tier security model that's been used for years and years:
  • Authentication: Who are you?
  • Authorization: What are you allowed to do?
  • Availability: Is the data accessible?
  • Authenticity: Is the data intact?
Which is very nice, and breaks things down well. You can already see that even conflating authentication and authorization is wrong and bad (we won't go into why, but if you're curious: look it up), and there's more to it. That's the key to this post, by the way: security is more than just turning on the password by default, or reducing the session timeout so you enter the (you guessed it) password again. Anyway, Dave Piscitello (an ICANN fellow, security consultant, and generally clever guy) asserts we need a fifth layer. I agree completely, though not everyone does, and accepting it at face value has pitfalls; let's not talk about that yet. The kind of thing I do is print this and stick it on the wall of my cubicle so I don't forget it:
  • Admissibility: Is the host device/channel valid and safe?
  • Authentication: Who are you?
  • Authorization: What are you allowed to do?
  • Availability: Is the data accessible?
  • Authenticity: Is the data intact?
For the most part, admissibility (well, good admissibility) means SSL or something similar. But mobile networks trump everything else. If you are a mobile operator, and the customer is using service on your network (not roaming, etc.), then you (more or less) own the whole experience. Ignore privacy concerns for the moment. Say that you are a mobile operator, and the customer wants to check their minutes. They go to your website via their phone. As far as the customer is concerned, it's a full internet connection: go out into the world and check a website. But it's not. Your network has to hand off to the internet at some point, and there are a lot of good reasons to check the DNS requests and send them down different pipes. When a request for your webserver comes in, you just send it across the hall to that box. So far, all that gets you is reduced network traffic, and reduced latency for the user. But what you can also do is realize that you own the packet from the webserver to the pixels on the screen. You can, for example, identify the user and show basic information without authentication. No cookies required: the first time they go there, the MEID (or other handset identifier) is used as the only key (authentication) to give this information (authorization). Sure, someone can gain physical control of the customer's device, maybe temporarily, so you might want to ask for a passcode for liability-inducing actions, like purchases or password changes. But for lower-priority items, for non-sensitive but personalized data, or when out-of-channel notifications can reduce the risk, identification by network and reasonable degrees of authorization are a good and useful thing.
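
To make the idea concrete, here is a hedged sketch of identification by network; the enrichment header name is entirely hypothetical, as real operators each have their own scheme for tagging on-network requests:

```python
def identify_by_network(request_headers):
    """On the operator's own network, the gateway can tag requests with a
    handset identifier before they reach the web server, so the customer
    is identified with no cookie and no sign-on at all."""
    handset_id = request_headers.get("X-Operator-Subscriber-Id")  # hypothetical
    if handset_id is None:
        return None  # off-network, roaming, or Wi-Fi: fall back to normal auth
    # Key into the profile store; still demand a passcode for
    # liability-inducing actions such as purchases.
    return handset_id
```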
So, let's bring it back to the security of voicemail systems. Voicemail is one of those services provided by you, the mobile network operator. Since you own the whole chain, front to back, a pretty reasonable solution is to require a password (a good one, but that's a different article) for access from a home phone, while roaming, or whatever, but just let the user in when they are dialing from inside your network. Exposure seems minimal. It's voicemail, so it's a bit cumbersome to use: you have to listen to it, which takes a certain amount of time, or figure out the prompts to forward it, and there's a tiny bit of a trail. If your boyfriend takes a very long shower so you have time to listen to his voicemail, he's still maybe going to notice the MWI (message waiting indicator, indicating new voicemail) has been cleared, and there's no good way to reset it. I see no serious pitfalls here, for /reasonable/ security. Remember, be reasonable: the service has to be easy to use, and most people provide decent physical access control of their devices. I am so confident in this assertion that I'll even admit I have been part of the decision-making for an operator to keep doing exactly this.
But we screwed up. No, not in retrospect, or because we mis-weighed the severity of getting into voicemail systems. Nope, much more basic: it was an admissibility problem. See, voicemail systems are provided by third parties, and not the same third party that installs the rest of the mobile switchgear. Don't ask why (because I don't know), but they are. Always, as far as I know, but I could be wrong. In at least some cases, maybe most and maybe all, they are also not particularly well integrated, to the point where they might not even be housed at the giant corporate data center.

And how do they determine when the call comes from inside the network? Well, at least sometimes, by caller ID. I'll let that soak in for anyone who is at all sad they missed the phone phreaking days. Yes, it's trivially easy to spoof caller ID. There, I just took 90 seconds out of writing this, googled some terms, and sent myself an SMS via a spoofing service (because it was free; I would have had to pay to use the voice calling services). Anyway: huge breakdown in the security chain. The only lesson being: keep asking questions, and get to the people who know how systems really work. Because using your network the right way is among the neater things that mobiles can do. If you are just some other web or app provider, you can even get agreements with the operators (or others who broker such deals for you) to get some of this same behavior out of your service. The lesson is not "more and longer passwords," as those are eminently crackable, and are pretty unfriendly to enter (more so on mobiles). Use the right security, and use it the right way.

Tuesday, September 7, 2010

Why can't you use the cool stuff you already have?

“...why can't work keep up? Why are you forced to use an unfamiliar, and sometimes outdated, operating system? Why do you need a second laptop, maybe an older and clunkier one? Why do you need a second cell phone with a new interface, or a BlackBerry, when your phone already does e-mail? Or a second BlackBerry tied to corporate e-mail? Why can't you use the cool stuff you already have?

...security is on the losing end of this argument, and the sooner it realizes that, the better.”

Tuesday, February 5, 2008

Computer chips will - as always - solve all of our problems

Check out this article on GPS nav units, and other electronics, being stolen from cars as they are left installed, in plain sight. How to fix this? Easy!
GPS manufacturers could solve the theft problem easily by installing computer chips to track them — but of course, that would make the devices cost more.
Um... a "computer chip"? GPS is a receiver. A chip alone does not have a radio, so how would the data get back to... anywhere? Of course it costs more when it's got a transmitter inside it. And how does it know it's been stolen, anyway? There is mention of registration of devices, I suppose. Do they want to allow only signed devices to work on the network (like US DBS services)? Because that's not how GPS is set up, at all. Anyway, this is not the first time I have heard such stuff, and it won't be the last.

Thursday, January 31, 2008

What makes them lie?

Am I the only one who actually travels across the U.S./Canada border? Anyone seen the press releases and reporting today? This one is typical:
Buffalo, NY (WBEN) - It used to be that when customs officers at the border crossing into the U.S. asked "Where were you born?", they'd take your answer at face value. New rules, in effect Thursday, require you to prove it. Americans and Canadians over age 18 will be required to show some document beyond a driver's license to prove citizenship, such as a passport or a birth certificate. In the past, some people entering the U.S. from Canada or Mexico simply had to declare their nationality...
This is a straight-up lie. The wife and I have crossed the border a few dozen times since 9/11. Guess what? They don't trust you at ALL. They used to, but it was just trust: technically the rules said you needed to prove it, so all of a sudden you'd better have a birth certificate or whatever else they feel like. My first encounter with this was troublesome, as it was a surprise; I got directed to a scary room with lots of stern looks, and only got in by lying. "I think they only draft citizens," as I showed the Selective Service card. I know this to be untrue, but it got me out of the room. To be clear, the Canadians are no better; they arbitrarily (i.e., suddenly enforced long-standing rules without telling us) tightened these restrictions at the same time. So, we've been traveling with passports every time for the past 6 years.

Monday, January 14, 2008

Sneakernet

Anyone remember this phrase? It seems to have fallen out of favor lately. For those too young or not nerdy enough, it's the practice of physically transporting storage media. Originally floppies, but I spent a lot of time driving SyQuest platters and other stuff around town. It was really the only way to do things when computer networks were poor, not well-interconnected, or nonexistent. A lot of time was spent walking (and mailing) disks about when I installed a PhoneNet network at my first job; it was revolutionary, but still too slow for most file transfer.

I was just thinking that sneakernet seems to be rather back in vogue now. I get or give flash-drive data constantly. All sorts of folks travelling or serving overseas send camera cards or other media back home. Packet sizes have gone up (my free promotional thumbdrive is 2 GB, or 46 times larger than my shockingly-large-at-the-time SyQuest) though speed is about the same. But unlike the old days, no one really seems to be considering it as a formal transport method. And that leads to the real issue: by being an ad hoc network, little or no attention is paid to efficiency, load, or the security of the network. For example, I have no idea where my thumbdrive is right this minute. And at least weekly there's a scandal where a government agency, bank, or someone else loses a disk. Media these days is not routinely secured, data is not routinely encrypted, and nothing inserted into my computer is scanned to make sure it's safe, all of which was pretty routine for transported physical media 20 years ago.

I worked at a place in the early 90s that transported 9-track tape, among other things, with medical and financial records of people. I believe the data was encrypted in some manner (not sure, but I know we had to bring it into the host computer and process it before use, so I think so), it was transported in a big red plastic case with a padlock (which also kept it safer from EMI issues), and it was moved by a medical courier service, with a well-tracked log and guarantees as to time-to-deliver. This procedure could not be violated; one time the tape wasn't ready when the courier left, and we couldn't just send some random employee out to deliver it. Why, despite our other paranoia, have we all become so complacent now, I wonder?

Tuesday, November 20, 2007

Mask the /secret/ part of the information

The article Best practices to redact account numbers from today's RISKs 24.91 brings up some good points, and one I have failed to execute correctly myself. The general gist is that redacting or masking of personal information should be done carefully, to reduce the risk of a bad person reconstructing the rest of it. Credit cards are well done, only revealing the last 4 digits (to tell which card you used). While some of the masked data is easily guessable, like the bank ID, enough remains as an unknown, individualized string that it's safe. Where I, for one, had failed was in understanding the method by which SSNs are encoded. A lot of organizations reveal the last 4 digits of the SSN for identifying purposes. But aside from other errors of revealing, the first five digits are reasonably crackable, as they encode the issuing location and date; revealing those instead would generally work well to distinguish individuals, while hiding the truly individualized information. Similar issues arise with other strings, like bank accounts. So, analyze your data before figuring out how to use it best, and most safely.
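
A small sketch of the difference; the sample numbers are made up:

```python
def mask_card(number: str) -> str:
    """Cards get it right: reveal only the last four digits."""
    return "*" * (len(number) - 4) + number[-4:]

def mask_ssn(ssn: str) -> str:
    """For SSNs, the last four are the truly individualized secret (the
    leading digits encode issuing location and era, so they're guessable);
    mask those, and reveal the guessable part instead."""
    return ssn[:-4] + "****"

print(mask_card("4111111111111111"))  # ************1111
print(mask_ssn("123-45-6789"))        # 123-45-****
```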

Thursday, August 30, 2007

Warning! Cool new feature!

Despite their best attempts to hide this sign on the filthy wall behind the ATM (is that a spider above it?), I noticed this the other day at the local Hy-Vee. It's pretty lame, but I unfortunately know what they are talking about. The Sprint campus had one of these, and it's communicated even more poorly. Check it out: Danger, envelopes are prohibited. You jerk. Now, I know what they mean and why they are doing it. There's a not-very-new law that everyone is finally complying with to save money. Your checks are basically destroyed at the receiving bank; a scan (and a file of metadata) flows electronically across the countryside to move the funds about appropriately. Scanning at the ATM probably means they can skip several steps, and just toss your paper check in the shredder: you do the scanning work for them, instead of the box of checks and envelopes going to a fulfillment center where they are opened, scanned, and so on. It speeds payment also, which is always supposed to be good, though I always like the delay in fund removal that giving someone a check affords me. In practice, to the end user, this is pretty cool, and you get to see your check on the screen before confirming everything, so it's pretty clear the system worked correctly. But, as you can tell, I have problems with the way at least Commerce did this. I suspect most other banks did just as poorly, assisted by obtuse ATM designers.
  1. As you can see above, the communication is awful. Take a feature, and turn it into a constraint. It practically reverses the standard joke, to "it's not a feature, it's a bug." I am not a communications designer, but I am sure there's a way to communicate this technical requirement without it being a prohibition on user behavior. Sure, it says please, but aside from the warning-like graphic there's not a positive word on there; nothing about the new feature, just an instruction to insert the checks alone. It's not even that close to the slot. Anyway, there are no envelopes, and I don't think they would even fit in the slot, so it's not a huge risk.
  2. Speaking of no envelopes, why not? Yes, I know it's supposed to scan the check, but placing deposits in an envelope has been going on for...ever. Before ATMs, you were given little envelopes (or similar devices) for cash or check deposits thru the pneumatic tubes or power drawers at bank drive-thrus. Almost anyone who has banked at all will be used to this system. So, why not a transition period? Provide envelopes, but encourage users to try the new envelope-less service. You have a full-color interactive system, so nifty interstitials or banners (better) can be loaded into the process. After 6 months, remove the envelope rack, and after a year remove the capability of accepting them at all.
  3. And this envelope habituation brings up another real-world issue I, for one, have. I never endorse checks going into ATMs. It's not required by law (I am very sure) and doesn't seem to be an issue in ATMs (never had one rejected). Also, I forget. I don't carefully prepare for my trek to the ATM, and they NEVER have pens on site, so by the time I am there it's impossible to sign them. No, I am not a girl, so I don't have a purse with pens. Okay, see any pens on that Sprint location ATM? Neither do I. Yet they seem to be scanning for endorsement, and get mad if it's not present. This is just poor design. Provide a pen, or two for when one gets stolen. Provide a bucket of them like envelopes, and don't worry about the loss rate, as it's free advertising. Or don't require endorsement, and if some new anti-terror legislation does indeed require it, provide a way to authenticate in some other manner.

Tuesday, August 21, 2007

Michael Geist shows me the true meaning of Web 2.0

Social networking sites are often the very definition of web 2.0. But an editorial this morning describes many of these services as walled gardens, and calls for opening them up. And now that I think about it, I agree. As usual, I think I always have agreed, and just didn't know it. Of course all information should be free, and so on. However, in the real world, there are other problems. Disregarding technical difficulties, I see two:
  1. Monetizing it: An excessively open structure is hard to make money off of. How is Twitter gonna make anyone rich? Okay, Skype, Flickr, etc. do have a plan, and are (for as much as I follow things) possibly going to be profitable with their businesses for the long haul. This model won't work for everyone, of course, so it takes a forward-thinking company to just be nice enough to start opening their data. It took some effort just to get Sprint to move all their help info in front of signon (and people are still suspicious of it). Feeds and tagging and other "2.0" features are a step beyond, and "to be nice to customers" or "it's cool" will not sell them to the guys with the checkbook.
  2. Legal restrictions: Or at least, security restrictions. I've dealt with scads of them, but how about Netflix? They recently discussed why they cannot disclose your rental info. It's a law, and it is what has caused the extra burden of customer-initiated disclosure, and limits on how far their social network technology can go. Similar concerns (or, if we're unlucky, laws) will dog the pure social network services, especially when we think of the children. Sure, it's mostly theater and hype, but there are some risks to just releasing all your info for search, or getting RSS feeds, or allowing any schmo to build a plugin.
Like everything, it seems we've gone too far, and not far enough, all at the same time. So, how do you do it on a legacy product?
  1. Pray you are not in a highly regulated industry.
  2. Okay, even then it's not a huge deal. Customers can release information, delegate others, etc. Within limits, but it's possible. Everyone needs to get used to this sort of security, what with increasing fear or the increased maturity of the internet (take your pick).
  3. Find a value proposition. A direct one is best, some way to either sell the service, sell access to other services as a result, or find some part of it to be sold as a premium service.
  4. Improve your brand. If there is no direct way to monetize the new features, sell the improvements in trust, stickiness, and interaction. Some marketing guys will object to missing their goals for page views in their current model, so you will need to work around that to make sure other sorts of interactions count, and get them the traffic they need to support your case.
P.S. The lists above are supposed to be numbered. They are showing as bullet lists to me though, so just pretend. Or tell me if they work for you. Bah, blogger!

Thursday, August 2, 2007

Costs of Data Retention

Does anyone remember the good old days, when developers were called "programmers" and used to pride themselves on the quality and brevity of their code? There's a distinct dark side to capacity, speed, and data being increasingly free at any small increment: the cost is invisible to the individual programmer, so it's disregarded. We all know about bloatware, and its impact on the speed of your computer. Less reported, but increasingly clear to me, are the costs of network traffic. The internet, your LAN, and your phone network are faster all the time, but they're not unlimited, especially as transaction counts rise due to more users, more devices per user, and more bloated software wanting to know more about you. But what about the cost of data retention? Now that space, even on the smallest device, seems free, too much is retained, and far too much information that should be considered secret. Yes, there are retries, preserving for the session, etc., but as a result of this, data is being retained forever. An ID badge I use every day has the SSN sitting there in plaintext. Why? Okay, so that isn't really secret anymore. How about my credit card? The industry has some solid metrics and standards that were supposed to be implemented in 2005. How are they doing? Visa has been running a series of polls. Remember, this is self-reported, not an audit by Visa; it could be much worse than this. eWeek: Retailers not exactly where Visa wants them to be
But given that Visa has said that there are 1,057 retailers in that group (327 Level 1 U.S. retailers and 730 Level 2 retailers), that four percent suggests that about 42 major retail chains aren't even claiming that they've stopped retaining that data. Visa estimates that the 96 percent relates roughly equally to both groups, suggesting about 13 retailers in the Level 1 group (with the very largest retailers) and about 29 in the Level 2 group.
This is mostly things like retailers retaining the entire contents of the magstripe, forever. I can only imagine why, and I'm hoping the more I write, the more I get towards an understanding of how good requirements become bad specifications, and business rules just get lost.

Tuesday, July 24, 2007

What should be secret?

BBC tech commentator Bill Thompson wrote on Monday about, well, several things. Ostensibly about security risks on the iPhone, but it rapidly got into much more interesting core issues.
The problem is apparently that we are all giving away too much information that should remain secret, like our date of birth, address and even details of which schools we have attended or where we have worked. This information should apparently be carefully protected because criminals can use it to fill in applications for credit cards or loans, stealing our identities and causing all sorts of problems. This seems to be entirely the wrong way around. I have never kept my birthday secret from my friends, partly because I like to get cards and presents, and I do not see why I should have to keep it secret from my online friends. If that means that other people can find out about it then the systems that assume my date of birth is somehow 'secret' need to adapt, not me.
I couldn't agree more. The fact that I cannot say this so well is why I don't have a technology column read by people in other countries. I think about this a lot due to the new FCC regulations I spend a lot of my time working on these days. The FCC seems to get it: we are not supposed to use that sort of personal information for any security. At all. Not even for recovery or anything. Security uses unique passwords, and other such stuff. Well, actually there is a "shared secret" recovery or bypass as well, but I think it's just because no one could come up with anything better; eventually, maybe we'll lose that also. Anyway, in theory this leaves us capable of thinking about interesting uses of that personal information. The FCC burdens us with calling practically anything personal CPNI, but it doesn't mean the customer can't reveal it themselves if they want to. Well, they can't really now, but perhaps some future, web 2.0 version of our site will let users publish Picture Mail, or Game Lobby, or other community-related info thru RSS feeds. Think of how cool, and useful, a mashup of mobile photography, mobile location, and public mapping tools could be. I also respect Bill for publishing his school, his mother's maiden name, his birthday, and so on. I cannot tell you how many people tell me, as though it's bad, that my home address is on my website. Yeah, on purpose. I just added my GPS coordinates, among other things. I'm not even sure I'd bother hiding this stuff even if I was a true celebrity. The same security principles hold, so door locks should match the proximate risk; as a stalker-worthy person, I'd just upgrade the locks.

Tuesday, July 17, 2007

Admissibility

Most everyone associated with security is familiar with the standard four-layer security model. Well, in August of last year Bruce Schneier published a brief update of this on his blog: Dave Piscitello proposed adding another layer. I liked it so much that I typed out the whole list and stuck it on my cube wall so I wouldn't forget it.
  • Admissibility: Is the host device/channel valid and safe?
  • Authentication: Who are you?
  • Authorization: What are you allowed to do?
  • Availability: Is the data accessible?
  • Authenticity: Is the data intact?
Although it's been on my wall, staring at me, for the last year, I hadn't made any connections to it yet. I was thinking of hard-wired, shielded networks with proprietary connections. And there is plenty of argument about its validity: if it's on an open network, and you rely on the machine to tell you everything is fine, that can be spoofed as well. The end result is I was not sure how it would ever apply to me, until I looked at it this morning.

I spend a lot of my time now designing UI and specifying behaviors for a bundle of fairly restrictive FCC regulations coming online on 12/8. If you don't work for a telecom, this is the fallout from that pretexting stuff last year. One of the components is notification of...everything. Changes, possible spoofing, password resets, etc. It all goes to your contact addresses on file: you get an email, text message, letter, or whatever. And for extra security, you have to wait 30 days before a new address can be used (till then, all comms go to the old address). Presumably this gives time to correct an improper reset action by someone hacking in. But the 30-day rule is exempted for wireless devices. Specifically (I have presumed, and specified, as it's not super-well written) our wireless devices. See, we own the network, head to toe. We have control over the device's access to the network. If you report it stolen, it's functionally disabled, immediately. And so on. So, this is a great example of device and network admissibility in practice.