Archive for 'Smartcards / PKI'

In a typical Single Sign-On (SSO)/federation scenario using SAML, the Service Provider (SP) initiates user authentication by sending a SAML AuthnRequest to an Identity Provider (IDP). The IDP authenticates the principal and returns a SAML response containing an assertion with an AuthnStatement confirming the authentication. If the user is successfully authenticated, the SP needs the authenticated principal's profile attributes in order to make local authorization decisions. To obtain the subject's profile attributes (ex. organization, email, role), the SP sends a SAML AttributeQuery request to the target IDP. The IDP returns a response containing an AttributeStatement assertion listing the attribute names and their associated values. Using the subject's profile attributes, the SP can then perform authorization operations.
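Under the hood, this exchange boils down to two small XML messages. Here is a hedged sketch of what the SP's AttributeQuery might look like – the element names come from the SAMLv2 protocol schema, but the issuer, ID, NameID value and attribute names are made-up placeholders:

```xml
<samlp:AttributeQuery
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_a1b2c3" Version="2.0" IssueInstant="2010-06-01T12:00:00Z">
  <saml:Issuer>https://sp.example.com</saml:Issuer>
  <saml:Subject>
    <saml:NameID>jdoe@example.com</saml:NameID>
  </saml:Subject>
  <!-- Ask the IDP (Attribute Authority) only for the attributes we need -->
  <saml:Attribute Name="organization"/>
  <saml:Attribute Name="email"/>
  <saml:Attribute Name="role"/>
</samlp:AttributeQuery>
```

The IDP's response then carries an assertion containing an AttributeStatement with a name/value pair for each requested attribute.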

 

Of course, it looks simple… here is the complexity. I spent the last two weeks building a Proof-of-Concept that conforms to the HSPD-12 Back-end Attribute Exchange specification and the SAMLv2 Attribute Sharing Profile for X.509 Authentication-based Systems (both specifications are mandated as part of the Federal Identity, Credential and Access Management (ICAM) initiative of the Federal CIO Council). I had been experimenting with an identity federation scenario that makes use of Smartcard/PKI credentials: the Card Authentication Key (CAK) X.509 certificate on a PIV card is authenticated against a PKI provider (using OCSP), and its X.509 credential attributes (Subject DN) are then used to look up off-card user attributes from an IDP that acts as an Attribute Authority. The IDP provides the user profile attribute information to the requesting SP. In simpler terms, the SP initiates X.509 authentication directly via an OCSP request/response with the Certificate Validation Authority (VA) of a Certificate Authority (CA). Upon successful authentication, the SP initiates a SAML AttributeQuery to the IDP (which acts as an Attribute Authority); the AttributeQuery carries the SubjectDN of the authenticated principal from the X.509 certificate and requests the subject's user profile attributes from the IDP.
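The SubjectDN carried in the AttributeQuery needs a predictable string form. As an illustrative sketch (the DN below is hypothetical; in the real flow it would come from the PIV card's certificate via X509Certificate.getSubjectX500Principal()), the JDK's X500Principal can normalize a DN to its RFC 2253 string form:

```java
import javax.security.auth.x500.X500Principal;

public class SubjectDnExample {
    public static void main(String[] args) {
        // Hypothetical Subject DN; a real one comes from the CAK/X.509
        // certificate on the card via getSubjectX500Principal()
        X500Principal subject =
            new X500Principal("CN=John Doe, OU=Agency, O=U.S. Government, C=US");

        // RFC 2253 string form - a stable representation to place in the
        // SAML AttributeQuery subject when querying the Attribute Authority
        System.out.println(subject.getName(X500Principal.RFC2253));
        // -> CN=John Doe,OU=Agency,O=U.S. Government,C=US
    }
}
```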

 

Using Fedlet for SAML X.509 Authentication based Attribute Sharing

 

SAML Attribute Exchange for X.509 based Authentication

 

The Fedlet is a lightweight SAMLv2 based Service Provider (SP) implementation (currently part of Sun OpenSSO 8.x, and soon to be available in Oracle Identity Federation) for enabling SAMLv2 based Single Sign-On. In simpler terms, the Fedlet allows an Identity Provider (IDP) to federation-enable an SP that has no federation implementation of its own: the SP plugs the Fedlet into a Java/.NET web application and is then ready to initiate SAMLv2 based SSO authentication, authorization and attribute exchanges. A Fedlet installed and configured with an SP can be set up to use multiple IDPs, where select IDPs can act as Attribute Authorities. In this case, the Fedlet needs its configuration updated with the IDP metadata (such as entity ID, IDP Meta Alias, and Attribute Authority Meta Alias – same as the IDP). In addition, Fedlets are capable of performing XML signature verification and decryption of responses from the IDP; to do so, they must identify the aliases of the signing and encryption certificates.

Here is the quick documentation I referred to for putting together the solution using Fedlets for SAMLv2 attribute sharing in X.509 based authentication scenarios. If you want your Service Provider to use OpenSSO for PIV/CAC based certificate authentication, you may refer to my earlier entry on Smartcard/PKI authentication based SSO (using OpenSSO). Beyond that, you should be good to test-drive your exercise. Of course, you can use Fedlets for Microsoft .NET service providers as well, but that wasn't in my scope of work!

 

If the SP needs to fetch multiple user profile attributes, you may also choose to use SPML based queries (SPML Lookup/Update/Batch Request/Response) against an Identity Manager acting as an Attribute Authority (assuming it provides an SPML implementation). If you are looking for a solution that requires user profile attributes after a single-user X.509 authentication, then a SAML AttributeQuery should help fetch the profile of the authenticated principal!
:-)


With increasing incidents of online fraud through username/password compromises and stolen/forged identity credentials, strong authentication using multi-factor credentials is often considered a defensive solution for ensuring a high degree of identity assurance when accessing Web applications. Adopting multi-factor credential based authentication has also become a common security requirement for enabling access control to critical online banking transactions and safeguarding online customer information (mandated by the FFIEC authentication guidelines). One-time passwords using tokens, USB dongles, Java Smartcards/SIM cards, mobile phones and other specialized devices have become the simplest and most effective option that can be easily adopted as the second-factor credential ("something I have") in a strong authentication solution. Although there are myriad ways to create one-time passwords, the overwhelming developer issue is making them work – readily integrating them with existing applications and further enabling them for use in Web SSO and federation scenarios.

 

One-time Password (OTP) Authentication using OpenSSO

 

The one-time password (OTP) is commonly generated on a physical device such as a token and is entered by the user at the time of authentication; once used, it cannot be reused, which renders it useless to anyone who may have intercepted it during the authentication process.

Sun OpenSSO Enterprise 8.x offers a ready-to-use OTP based authentication module that allows delivering one-time passwords via SMS (on mobile phones), personal email, or a combination of both. OpenSSO implements the Hashed Message Authentication Code (HMAC) based One-time Password (HOTP) algorithm as defined in RFC 4226 – an IETF/OATH (Open Authentication) joint initiative. HOTP is based on the HMAC-SHA-1 algorithm, using an increasing 8-byte counter value and a static symmetric key that is known to both the HOTP generator and the validation service. In a typical OpenSSO deployment, the HOTP authentication module is configured as part of an authentication chain that includes a first-factor authentication (ex. username/password authentication with LDAP or a Datastore). This means that at least one of the existing authentication modules must succeed before HOTP authentication commences.
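The HOTP computation described above is compact enough to sketch. The following is an illustrative implementation (not OpenSSO's actual code) of RFC 4226: HMAC-SHA-1 over the 8-byte big-endian counter, dynamic truncation, then reduction modulo 10^digits. It reproduces the test vectors from the RFC's Appendix D:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hotp {
    // HOTP per RFC 4226: HMAC-SHA-1 over an 8-byte big-endian counter,
    // dynamic truncation, then modulo 10^digits.
    static String hotp(byte[] key, long counter, int digits) throws Exception {
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) {        // serialize counter big-endian
            msg[i] = (byte) (counter & 0xff);
            counter >>>= 8;
        }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] h = mac.doFinal(msg);

        int off = h[h.length - 1] & 0x0f;     // dynamic truncation offset
        int bin = ((h[off] & 0x7f) << 24) | ((h[off + 1] & 0xff) << 16)
                | ((h[off + 2] & 0xff) << 8) | (h[off + 3] & 0xff);
        int otp = bin % (int) Math.pow(10, digits);
        return String.format("%0" + digits + "d", otp);
    }

    public static void main(String[] args) throws Exception {
        // RFC 4226 Appendix D test key: ASCII "12345678901234567890"
        byte[] key = "12345678901234567890".getBytes(StandardCharsets.US_ASCII);
        System.out.println(hotp(key, 0, 6)); // RFC test vector: 755224
        System.out.println(hotp(key, 1, 6)); // RFC test vector: 287082
    }
}
```

The validation service runs the same computation with the shared key and its own counter, which is why the counters on both sides must stay (approximately) in sync.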

 

Try it yourself

To deploy OTP for Web SSO authentication, all you need is OpenSSO Enterprise 8.x installed, configured and running… then follow these steps:

  1. Login to OpenSSO Administrator console, select the “Access Control” tab, select your default “Realm”, select “Authentication”. Click on “Module Instances” and click on “New” to create a Module instance. Assign a name to the module instance (ex. HOTP) and select “HOTP” as type.
  2. Configure the HOTP authentication module properties. You need to identify values for the Authentication Level, SMTP Server (access credentials including host name, port, username, password), one-time password validity length (the maximum time an OTP remains valid after creation, before it expires), one-time password length (6 or 8 digits) and one-time password delivery ("SMS", "Email", or "Both" to receive SMS and email).
    • Configuring HOTP Authentication Module Properties

  3. Configure an authentication chain that includes the HOTP authentication module together with any other authentication module (ex. Datastore, LDAP). Note that HOTP cannot act as the primary authentication, since HOTP authentication does not identify the user profile; it must be combined with an authentication module that identifies the calling user's identity. To create an authentication chain, go to the OpenSSO administrator console, select "Access Control", go to "Authentication Chaining", click "New", assign a name to the authentication chain (ex. "Two-factor"), then choose the "HOTP" module instance and select "Required".
    • Configuring the Two-factor authentication chain including HOTP

  4. Now the OpenSSO One-time Password authentication module is ready for use as part of the "Two-factor" authentication chain.
  5. Create a user profile whose "Telephone Number" attribute contains the mobile phone number appended with the SMS gateway domain (ex. 5551234567@sms-gateway.example.com – the domain here is just a placeholder; use your carrier's SMS gateway domain).
  6. Test-drive the configured one-time password based SSO authentication by accessing the URL of the configured "Two-factor" authentication chain (ex. http://yourhost:port/opensso/UI/Login?service=Two-factor).
  7. As a result, you will be prompted to perform username/password authentication, followed by HOTP. To have the one-time password delivered, click "Request OTP Code"; the one-time password will be sent to your mobile via SMS and also via email (as provided in your user profile).
    • One-time Password based SSO
    • As verified using my Blackberry… the OTP showed up as follows:

  

Adopting one-time password based authentication credentials certainly helps defend against illegitimate access using compromised user credentials such as passwords, PINs and digital certificates. Using OpenSSO based OTP authentication is just a no-brainer… try it for yourself, I am sure you will enjoy it!


For the last few weeks, I have been pulled into an interesting gig demonstrating security for _____ SOA/XML Web Services and Java EE applications… so I had a chance to play with some untold security features of Solaris 10. KSSL is one of the unsung yet powerful security features of Solaris 10. As the name suggests, KSSL is a Solaris kernel module that implements the server side of the SSL protocol to offload operations such as SSL/TLS based communication, SSL/TLS termination and reverse-proxying for end-user applications. KSSL takes advantage of the Solaris Cryptographic Framework (SCF) to act as an SSL proxy server, performing complete SSL handshake processing in the Solaris kernel and using the underlying hardware cryptographic providers (SSL accelerators, PKCS#11 keystores and HSMs) to enable SSL acceleration and support secure key storage.

Before I jump into how to use KSSL for offloading SSL operations, here are some compelling aspects you may want to know:

  1. Helps non-intrusively introduce an SSL proxy server for Web servers, Java EE application servers, and applications that don't implement SSL.
  2. The KSSL proxy listens for all secured requests on the designated SSL port (ex. https://host:443) and forwards cleartext traffic via a reverse-proxy port (ex. http://host:8080) to the underlying Web or application server. All SSL operations, including the SSL handshake and session state, are performed asynchronously in the Solaris kernel, without the knowledge of the target application server.
  3. KSSL automatically uses SCF for offloading operations to underlying hardware cryptographic providers with no extra effort needed.
  4. Manages all SSL certificates independently, supporting most standard formats (ex. PKCS12, PEM); the key artifacts can be stored in a flat file or a PKCS#11 conformant keystore (if you are worried about losing the private key).
  5. Supports the use of Solaris zones, where each IP-identified zone can be configured with its own KSSL proxy.
  6. Delivers 25% – 35% faster SSL performance in comparison with traditional SSL configurations of most popular Web servers and Java EE application servers.
  7. KSSL can be used to delegate Transport-layer security and the applications may choose to implement WS-Security mechanisms for message-layer security.

Those are some compelling aspects of KSSL that are hard to ignore… if you really understand the pain of the performance overheads associated with SSL/TLS :-)   As I verified, KSSL works well with most common Web servers and Java EE application servers.

 

Try it yourself

Certainly it is worth a try… and you should be able to do it more quickly than configuring SSL for your web server!

 

  • Obtain your server SSL and CA certificates. If you just want to test-drive KSSL and are considering using a self-signed OpenSSL certificate, just follow the example commands and make sure that your web server hostname is correct. If you choose to use a flatfile based SSL keystore, KSSL requires all your certificate artifacts (including the private key and certificates) to be in a single file. If you need more OpenSSL help, read my earlier post.

          Ex. To create a self-signed server certificate using OpenSSL (in PEM format).

    openssl req -x509 -nodes -days 365 \
      -subj "/C=US/ST=Massachusetts/L=Burlington/CN=myhostname" \
      -newkey rsa:1024 -keyout myServerSSLkey.pem -out mySelfSSLcert.pem

           Ex.  Concatenate the server certificates in a single file.

    cat mySelfSSLcert.pem myServerSSLkey.pem > mySSLCert.pem
  • Configure the KSSL proxy service, assuming the secured requests arrive on an SSL port (ex. 443) and the reverse-proxy of your backend Web server listens on a non-SSL port (ex. 8080). Use the -f option to identify the certificate format: PEM (-f pem) or PKCS#12 (-f pkcs12). If the certificates are located in an HSM/PKCS#11 keystore, use -f pkcs11, with -T to identify the token label and -C to identify the certificate subject.

          Ex. To configure the KSSL proxy service with SSL port 443 and reverse-proxy port 8080, using PEM based certificates and the passphrase stored in a file (ex. password_file).

           ksslcfg create -f pem -i mySSLCert.pem -x 8080 -p password_file webserver_hostname 443
  • Verify the KSSL proxy service under Solaris Service Management Framework (SMF) control; the KSSL service is identified by the FMRI svc:/network/ssl/proxy.
                    svcs -a | grep "kssl"
  • Assuming your backend webserver listens on port 8080, you should be able to test the SSL configuration provided by the KSSL proxy. Open your browser and go to https://webserver_host:443/ – you should be prompted by an SSL warning dialog to accept the self-signed certificate.
  • More importantly, if your Solaris host is a Sun CMT server (based on UltraSPARC T1/T2 processor), KSSL automatically takes advantage of the cryptographic acceleration and no additional configuration is necessary.
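Because KSSL terminates SSL transparently in the kernel, a quick sanity check before pointing a browser at it is simply to confirm that something is accepting TCP connections on the SSL port. A small sketch (the hostname and port mirror the ksslcfg example above; adjust for your host – and note this only confirms the listener is up, it does not exercise the SSL handshake):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if something is accepting TCP connections at host:port.
    static boolean isListening(String host, int port, int timeoutMillis) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "webserver_hostname" and 443 mirror the ksslcfg example above
        System.out.println(isListening("webserver_hostname", 443, 2000));
    }
}
```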

Here is an unofficial benchmark that highlights performance comparisons between KSSL and other SSL options. The following shows the latency of a Web application running on Oracle WebLogic Server using different SSL configurations (certificate using RSA 1024) on a Sun CMT server (T5440). To interpret the graph, note that smaller latency means faster.

 

Adopting Sun CMT servers (based on UltraSPARC T1/T2 processors) helps deliver on-chip cryptographic acceleration for SSL/TLS and its cryptographic functions. With a KSSL based SSL deployment, you will get at least an additional 30% performance advantage compared with other Web server based SSL deployments. I heard that Intel Nehalem EX processors are expected to provide similar on-chip crypto capabilities – not sure! Either way, using KSSL is a no-brainer and it works. If you are scratching your head over providing transport-layer security for your applications, this could be the easiest way to go! Of course, it can help you score some points in those IT infrastructure security assessment checklists verifying PCI-DSS, FISMA, HIPAA and/or similar regulatory/industry compliance mandates! :-)


Looks like convergence projects are in the limelight… lately I have noticed a lot of interest in enabling the use of common credentials for securely accessing physical and logical resources. Although most convergence projects are targeted at the enterprise level, there are serious minds working on using smartcard based PKI credentials to support citizen-scale projects (I regret that I cannot discuss the specifics)! Of course, the use of on-card PKI credentials and their on-demand verification with the PKI service provider has been in practice for a while now at security sensitive organizations. The DoD CAC, PIV and most smartcard based National ID/eID cards contain PKI certificate credentials, and a few of them include biometric samples of the card holder as well. Using those on-card identity credentials for accessing physical and logical resources is critical, and it also fulfils the ultimate purpose of issuing smartcard based credentials… it cannot be overstated.

 

A couple of weeks ago, I had a chance to present and demonstrate PIV card credential based logical access control using Sun IDM, OpenSSO Enterprise and WinXP running in a Sun Ray environment. The demo was hosted at one of the Big 5 SIs. If you are curious to see my preso detailing the pieces of the puzzle… here you go:


Java Card technology has been a passion of mine for a long time, and I have always tried my best to keep updated on Smart card technologies… not just because of my role at Sun: I got several opportunities to work closely with citizen-scale Java Card deployments for multiple National ID, eID/ICAO, US DoD/CAC, PIV/FIPS-201 cards and related identity management projects. It has always been quite an adventure to experience a card issuance architecture and deployment scenario – right from applicant enrollment, demographic data provisioning, Biometrics/PKI credentialing, adjudication/background checks and post-issuance maintenance, through card authentication/verification/usage and final retirement/termination. In the early 2000′s, I even had an opportunity to write a couple of Java Card applets for a Big 5 financial organization using Java Card 2.x, and they still exist in production (no kidding – one of them may be in your wallet). With all those experiences, I had my own stumbling issues with programming Smartcards, where I pulled my hair out understanding those evil "Application Protocol Data Unit" (APDU) based commands and responses. In my opinion, APDUs are quite complex to understand when you jump in unless you read the docs in-and-out beforehand, and test-driving APDUs is even harder unless you have the luxury of a debugging environment – seriously, you may not want to experience those pains. Having said that, now we can breathe a sigh of relief. I am a bit late to experience the newer features of Java Card 3.0 – it has introduced "network-centric" and "Java/Java EE developer" friendly features that radically change the way we originally designed, developed, deployed and integrated Smartcard applications. Interestingly, there are very compelling aspects of Java Card 3.0 technology. As I dug in with my little experience… here are my observations.

 

Understanding Java Card 3.0  

  1. A Smartcard can act as a "Personal Web Application Server" or a user-centric miniature Java EE application server on a network. Java Card 3.0 has introduced a Servlet container environment referred to as the "Connected Edition", which allows smartcard applications to be built as Java servlets (Web applications) using the Servlet 2.4 APIs and deployed as a "WAR" file to the Web container running on a Java Card 3.0 compliant Smart card. This Servlet based deployment model is in addition to the existing Java Card applet deployment model, referred to as the Classic Edition (as it exists in Java Card 2.2.x). Java Card clients access the applications using a Web browser (ex. http://localhost:8019/myJavaCardServlet).
    Java Card Platform - Architecture

  2. Java Card 3.0 supports 32-bit processor based Smartcards and handles more memory – up to 128 KB.
  3. Enough with the pain of understanding and testing APDUs: you can now develop Java Servlet 2.4 API compliant Web applications and deploy them to a Smart card.
  4. With Java Card 3.0, we can interact with the card using standards-based communication over HTTP/HTTPS, including support for XML based protocols such as SOAP, REST etc.
  5. Supports the Java crypto APIs; additionally, you can enable access control with the card similar to container-managed authentication in Java EE – using SSL/TLS mechanisms.

    Java Card 3.0 - Communication Protocols

  6. Java Card 3.0 based Web applications can be developed, debugged and deployed using Netbeans 6.7.1 and up.
  7. Smart card issuance (for card holders) and updates can be done through a Web based deployment model (via HTTP, TCP) using the Generic Connection Framework (GCF) – over both contact and contactless communication interfaces.
  8. Other features include full Java language support (Java 1.6 features) including all data types (except float and double), multi-threading, garbage collection, XML parsing/generation capabilities etc.
  9. Allows Java developers to explore the Java Card platform easily, with strong potential for deploying security applications intended for National ID card schemes and passports, and for simplifying deployment of "match-on-card" Biometrics, "on-card" credential persistence and secure transaction based applications.
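Point 4 above is worth emphasizing: because the card answers ordinary HTTP, talking to an on-card servlet requires nothing beyond a stock HTTP client. A minimal sketch – the URL reuses the example from point 1 and is an assumption; adjust it for your card or emulator:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CardClient {
    // Fetches the body served at the given URL; since a Java Card 3.0
    // "Connected Edition" card speaks plain HTTP, any stock HTTP client works.
    static String fetch(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) body.append(line);
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws IOException {
        // URL/port are the example values from point 1 above (assumption)
        System.out.println(fetch("http://localhost:8019/myJavaCardServlet"));
    }
}
```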

 

Try it yourself

If you are curious to test-drive the Java Card 3.0 reference implementation, especially using its "Connected Edition" to deploy a Java Servlet based application to a Smart card – before you begin, make sure you obtain the prerequisites:

  1. Java Card Connected Development Kit 3.0.1
  2. Netbeans 6.7.1

and then proceed with the following steps for deploying a “Hello World” Web application – creating Java card applications can’t get easier than this :

  1. Install the Java Card 3.0 plugins for Netbeans 6.7.1 – Go to Tools, Plugins and search for card to select plugins for “Java Card Projects” and “Java Card Console”.  
    Installing Java Card plugins for Netbeans

  2. Go to the Netbeans IDE, choose Project – "Java Card" and select project type "Web Project".
    Creating a Java Card "Web Project"

  3.  Assign Project name/location/folder and then select “Manage Platforms” to assign the Java Card 3.0 runtime environment.   

     

     

    Assigning the "Java Card" runtime environment

     

     

  4.  To assign the Java Card runtime info, select “Manage Platforms” and choose “Platform type” to Java Card Platform.  
    Choosing "Java Card" as the runtime environment

  5.  Select the location of your ”Java Card 3.0 Connected Edition Dev kit” installation. 

      

     

    Select the "Java Card 3.0 Connected Edition Dev Kit" folder

     

  6.  Define the default device (assuming your Smartcard) attributes and press “Finish”: 
    Select your "Java Card"

     

  7. As a result, you should see the Netbeans console showing your "Java Card Platform" environment for test-driving your applications.
  8. With the above steps complete, you are now ready to develop, debug and deploy your Java Card web applications… here is my first "Hello World" Java Card Web application exercise.
  9. Compile the application – in the Projects window, right-click the project node and choose Build to build the project.
  10. To deploy and run the Web application on your target Smartcard device (in my case the Java Card RI), right-click the project node in the Projects window and choose Load/Create Instance, or just Run. Netbeans will launch the browser, displaying the Hello World application prompting for your name… push the button and see what happens!

Netbeans does all the magic for you – if something isn't working, no worries! As with any other Web application in an IDE, it is now easy to painlessly debug and redeploy the application – I am sure you'll find that deploying applications on Java Card is no longer a mystery.

 

With billions of Java Cards already in use and so much demand for Smartcard technology, Java Card 3.0 promises to go beyond citizen IDs and can potentially act as a "Personal Web application server" in your wallet.

 

Thanks to Anki Nelaturu and Saqib Ahmad, who introduced me to Java Card 3 with their JavaOne ’09 sessions. After playing with my first exercise on the Java Card 3.0 RI, I am now chasing my friendly Smartcard vendors to lend me a couple of Java Card 3.0 cards :-)


FIPS-140* compliance has gained overwhelming attention these days, and it has become a mandatory requirement for several security sensitive applications – mostly in government and security solutions, recently in select finance industry solutions, and particularly for achieving compliance with regulatory mandates such as PCI DSS, FISMA, HIPAA, etc. FIPS-140 also helps define the security requirements for integrating with cryptographic hardware and software tokens. Ensuring FIPS compliance for Java based application security has been one of the demanding needs of security enthusiasts, but unfortunately neither Sun JCE nor JSSE is yet FIPS-140 certified – hopefully soon! Sun JDK 6 (and above) has introduced several enhancements, including support for enabling FIPS-140 compliance in Sun JSSE by using FIPS-140 certified cryptographic providers for SSL/TLS communication and the associated cryptographic operations. To accomplish this, Java 6 uses the PKCS#11 support in JSSE to integrate with a PKCS#11 based FIPS-140 cryptographic token.

 

Lately I worked on a security solution using SunJSSE with NSS as a software cryptographic token… and here is my tipsheet for those keen on exploring FIPS conformance with SunJSSE.

 

  • SunJSSE can be configured to run on FIPS-140 compliant mode as long as it uses a FIPS-140 certified cryptographic hardware or software provider that implements all cryptographic algorithms required by JSSE  (ex. Network Security Services – NSS, Sun Cryptographic Accelerator 6000, nCipher, etc).

 

  • To enable FIPS mode, edit the file ${java.home}/lib/security/java.security and modify the line that lists com.sun.net.ssl.internal.ssl.Provider, appending the name of the FIPS-140 cryptographic provider (ex. SunPKCS11-NSS). The name of the provider is a string that concatenates the prefix SunPKCS11- with the name specified in the PKCS#11 provider's configuration file.

                            security.provider.4=com.sun.net.ssl.internal.ssl.Provider SunPKCS11-NSS

 

  • When using NSS as a software cryptographic token (make use of NSS 3.1.1 or above), assuming the libraries are located under the /opt/nss/lib directory and its key database files (with the suffix .db) are under the /opt/nss/fipsdb directory, the sample configuration representing NSS will be as follows:
                           # Use NSS as a FIPS-140 compliant cryptographic token 
                           # SunPKCS11-NSS
                          name = NSS
                          nssLibraryDirectory = /opt/nss/lib
                          nssSecmodDirectory = /opt/nss/fipsdb
                          nssModule = fips
  • In FIPS mode, SunJSSE performs TLS 1.0 based communication and cryptographic operations, including symmetric and asymmetric encryption, signature generation and verification, message digests and message authentication codes, key generation and key derivation, random number generation, etc.
  • For the list of ciphersuites SunJSSE supports in FIPS mode, refer to the Sun JSSE documentation and its notes for FIPS guidance.
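A quick way to confirm that the provider edit took effect is to enumerate the installed JCA providers from a scratch program; if the configuration above is active, a provider named SunPKCS11-NSS (the prefix-plus-config-name convention described above) should appear in the list. A minimal sketch:

```java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Prints every registered JCA provider in preference order.
        // With the java.security edit above in effect, a provider named
        // "SunPKCS11-NSS" should appear here (that name assumes your
        // PKCS#11 config file uses name = NSS, as in the sample above).
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + ": " + p.getInfo());
        }
    }
}
```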

 

* FIPS-140 is a US Federal data security standard approved by the National Institute of Standards and Technology (NIST) – The current version is FIPS-140-2. All US government agencies are mandated to use only FIPS-conformant/validated products for deploying security sensitive applications and solutions.

Lately, biometric identification and authentication technologies have been gaining unprecedented importance in government organizations across the globe – as evidenced in the US by the introduction of HSPD-12 and HSPD-24, and by other countries complying with ICAO requirements for biometric-enhanced machine-readable travel documents (MRTDs)/ePassports – providing support for facial/fingerprint identification of travelers passing through airports and security-sensitive locations, and ensuring protection against identity theft.

I just came across this interesting prediction and analysis by Matia Grossi, Frost & Sullivan's industry analyst – highlights:

  • Biometric technology adoption will triple by 2012 from its 2008 value.
  • Biometric technologies are getting increased attention in commercial markets particularly the financial, healthcare, retail and educational sectors.
  • Technologies currently gaining momentum include face recognition 2D/3D, Iris scans, Hand geometry, Vascular scans (palm vein scans), and Retina scans. Upcoming physiological technologies will be skinprints, earlobe scans, brain fingerprints, and DNA recognition.
  • By 2020, Multimodal biometrics using combination of fingerprint, Face, and Iris will emerge as the standard biometric identification solution for  government, border control and airport security applications.

I didn't have a chance to read the complete report… all I read was the highlights by Matia Grossi, Frost & Sullivan's industry analyst, right here. If you are curious about using biometric technologies for enabling physical and logical access control, read my earlier posts on Biometric SSO Authentication and Provisioning/De-provisioning Biometrics for Physical and Logical Access Control.


Important Disclaimer: The information presented in this weblog is provided “AS IS” with no warranties, and confers no rights. It solely represents our opinions. This weblog does not represent the thoughts, intentions, plans or strategies of our employers.