Published in an issue of Chip Design Magazine

The term *cryptography* (which is derived from the Greek *kryptos*, meaning "hidden" and *grafo*, meaning "write") refers to the science of coding (encrypting) and decoding (decrypting) messages and/or data so as to keep their contents secure.

In addition to well-known applications such as online financial transactions, cryptography is increasingly being used to protect a wide variety of data, from the intellectual property (IP) used in electronic designs to personal documents on home computers.

Until recently, the most widely supported public key encryption scheme was RSA, but now there's a new kid on the block: *Elliptic Curve Cryptography (ECC)*. Endorsed by the National Security Agency (NSA), the National Institute of Standards and Technology (NIST), the American National Standards Institute (ANSI), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF), ECC is set to become the next-generation public key cryptosystem.

The reason for the excitement surrounding ECC is that it requires much smaller keys than RSA to provide equivalent security; also, ECC is extremely computationally efficient, providing savings in terms of time, memory, bandwidth, and energy consumption.

**Cryptography 101**

In order to understand how ECC fits into the picture, it's worth taking a little time to remind ourselves of the various cryptographic techniques in widespread use. In order to encrypt a source file, the encryption algorithm uses a special number known as the *key*. The value of this key modifies the detailed operation of the algorithm; that is, the way the contents of the original file will be "scrambled up." This means that if the same file is encrypted using two different keys, the results will be totally dissimilar.
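To make this concrete, here's a deliberately insecure toy cipher (an XOR keystream derived from SHA-256, purely for illustration, not anything a real system would use): the same message encrypted under two different keys produces completely dissimilar results.

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy cipher for illustration only (NOT secure): stretch the key
    into a keystream with SHA-256 and XOR it against the data."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

message = b"Attack at dawn"
c1 = toy_encrypt(message, b"key-one")
c2 = toy_encrypt(message, b"key-two")

print(c1 != c2)                                # True: different keys give dissimilar ciphertexts
print(toy_encrypt(c1, b"key-one") == message)  # True: applying the same key again decrypts
```

Because XOR is its own inverse, the same function both encrypts and decrypts, which also previews the "symmetric" idea discussed next.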

Not surprisingly, algorithms and keys are created in such a way as to make "cracking the code" by unauthorized parties as difficult as possible. The point is that, in order to open the encrypted file and access its data, the end user requires access to an appropriate key.

*Symmetric Encryption:* Historically, encryption algorithms have primarily been of a type known as *symmetric*. This means that the same key is used to encrypt and decrypt the file (Figure 1).

Figure 1. A symmetrical encryption algorithm requires the originator to communicate the key to the end user (whoever is performing the decryption).

Examples of this type of algorithm are the Data Encryption Standard (DES), whose specification was first published in 1977, Triple DES [also known as TDES or TDEA (Triple Data Encryption Algorithm), which involves using DES three times], and the more sophisticated Advanced Encryption Standard (AES), which was adopted in 2001.

The advantage of this technique is speed due to its relatively low computational requirements. Using a modern computer, encrypting even a large file using a symmetric algorithm takes only seconds, and the time taken to decrypt the file is typically imperceptible to the user.

The Achilles' heel to symmetric encryption is the need to communicate the key. Since the art of cryptography began, this has involved trusted couriers traveling around the world to convey keys to the end users. This approach is obviously not practical in the context of everyday use.

*Asymmetric (Public Key) Encryption:* In 1976, cryptographer Whitfield Diffie and electrical engineer Martin Hellman created a new form of encryption/decryption known as *asymmetric*. The "asymmetric" appellation is applied because the key used to decode the data is different from the key used to encode it. Although the DH (Diffie-Hellman) protocol is still used, a more general and more commonly used approach was described by MIT researchers in 1977; this system is known as RSA, based on its discoverers' surnames (Rivest, Shamir, and Adleman).

Asymmetric schemes are also commonly known as *public key encryption*, because they rely on the use of two keys: a *public key* and a *private key* (Figure 2).

Figure 2. An asymmetrical encryption algorithm requires the end user to generate public and private keys, and to provide the public key to the folks performing the encryption.

The idea here is that the public key is generated by the end user, who makes it available to everyone (or at least, to everyone who needs to know about it). This public key is used for encryption by the originator of the message, but it cannot be used to decrypt the ensuing file; decryption requires access to the private key. In the case of RSA, the public key includes the product of two large prime numbers, while the private key is derived from those same primes, which are kept secret.
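As a sketch of how this works, here is RSA at toy scale, using the tiny primes 61 and 53 (real RSA uses primes hundreds of digits long, plus padding schemes; this is illustration only):

```python
# Toy RSA with tiny primes -- illustration only, trivially crackable.
p, q = 61, 53              # the two secret primes
n = p * q                  # public modulus: 3233
e = 17                     # public exponent
phi = (p - 1) * (q - 1)    # 3120
d = pow(e, -1, phi)        # private exponent derived from the primes: 2753

m = 65                     # message, represented as a number < n
c = pow(m, e, n)           # encrypt with the PUBLIC key (e, n)
m2 = pow(c, d, n)          # decrypt with the PRIVATE key (d, n)
print(c, m2)               # 2790 65
```

Anyone can run the encryption step because e and n are public, but reversing it requires d, and computing d means factoring n back into p and q, which is what makes large RSA keys hard to crack.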

The main advantage of asymmetric schemes is that the key used to decode the file never has to be passed around. However, the biggest disadvantage associated with asymmetric approaches is that they are extremely compute-intensive (a large block of data may take hours to encrypt or decrypt).

*Hybrid Encryption:* The term *hybrid encryption* refers to a combined symmetric-asymmetric encryption/decryption flow (Figure 3). First, the originator encrypts the data using an internally generated symmetric key known as the *data key* (Figure 3a). As was previously discussed, this form of encryption is extremely fast, even on large blocks of data. The result from this step is known as the *data block*.

Figure 3. In a hybrid approach, the data is encrypted using a symmetric algorithm, and then the key to the encrypted data is itself encrypted using an asymmetrical algorithm.

Next, the originator takes the data key and encrypts it using an asymmetric algorithm and the end user's public key (Figure 3b). The result of this operation is known as a *key block*. Although this form of encryption is relatively compute-intensive, the data key itself is very small, so the entire process takes only a fraction of a second. Finally, the originator bundles the data block and key block into a single file and communicates this file to the end user (Figure 3c).
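The three steps above can be sketched end-to-end by combining a toy XOR cipher (standing in for AES or similar) with toy RSA; everything here is illustration-scale, not a real implementation:

```python
import hashlib
import secrets

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (NOT secure): XOR with a SHA-256-derived keystream.
    stream = b""
    i = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Toy RSA key pair belonging to the END USER (tiny primes, illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

# (3a) Originator encrypts the data with a freshly generated symmetric data key.
data_key = secrets.randbelow(n - 2) + 2        # small enough to RSA-encrypt directly
data_block = xor_stream(b"A large confidential document...",
                        data_key.to_bytes(2, "big"))

# (3b) Originator encrypts the tiny data key with the end user's PUBLIC key.
key_block = pow(data_key, e, n)

# (3c) Both blocks travel together; the end user recovers the data key with the
# PRIVATE key, then uses it to decrypt the data block.
recovered_key = pow(key_block, d, n)
plaintext = xor_stream(data_block, recovered_key.to_bytes(2, "big"))
print(plaintext)
```

In practice the symmetric stage would be AES with a random 128- or 256-bit data key, but the principle is the same: fast bulk encryption of the data, plus one small, slow asymmetric operation on the key.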

Interestingly enough, this hybrid approach is the same technique as that employed by the PGP (Pretty Good Privacy) scheme, which was first published by Phil R. Zimmermann in 1991.

**What is ECC and why do we need it?**

RSA has been around for a long time and is well understood. Unfortunately, hackers (those nefarious rapscallions who wish to break the code and access the data) now have access to phenomenally powerful computers that can crack the smaller keys.

Originally, 512-bit RSA keys were considered to be sufficient. Over time this was increased to 768 bits, then 1,024 bits, and then 2,048 bits. More recently, NIST recommended pairing 128-bit AES keys with 3,072-bit RSA keys. Meanwhile, the Europeans are even more conservative, recommending 128-bit AES keys with 6,000-bit RSA keys.

The problem is the expensive computational requirements associated with using these large keys; moving from a 1,024-bit RSA key to a 2,048-bit key, for example, requires 8x the processing (the cost of the underlying modular arithmetic grows roughly with the cube of the key length). In the hand-held product arena, personal digital assistants (PDAs) and communication devices simply don't have the processing capability to use RSA keys of 3,072 bits and higher.

The solution is to use ECC, which requires much less processing while – at the same time – being much harder to crack. For example, a 256-bit ECC key is as secure as a 3,072-bit RSA key. Similarly, the 521-bit ECC keys used in BlackBerry wireless handheld devices are equivalent to RSA keys with 15,000+ bits!
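For reference, these equivalences match NIST's comparable-strength table from Special Publication 800-57 (ECC sizes here are approximate; NIST actually specifies ranges), which can be tabulated as follows:

```python
# Comparable key strengths per NIST SP 800-57:
# (bits of security, symmetric cipher, RSA modulus bits, ECC key bits)
nist_equivalents = [
    (80,  "2TDEA",   1024,  160),
    (112, "3TDEA",   2048,  224),
    (128, "AES-128", 3072,  256),
    (192, "AES-192", 7680,  384),
    (256, "AES-256", 15360, 512),
]

for bits, sym, rsa, ecc in nist_equivalents:
    print(f"{bits:>3}-bit security: {sym:<7} ~ RSA-{rsa:<5} ~ ECC-{ecc}")
```

Note how the RSA column grows much faster than the ECC column: at the 256-bit security level, RSA needs 15,360-bit keys, which is where the "15,000+ bits" comparison for the 521-bit ECC keys comes from.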

As it happens, ECC was first introduced in 1985 by Neal Koblitz from the University of Washington and Victor Miller from IBM. So why has it taken so long to catch on? Well, when ECC was first presented it was not well-understood; also, RSA was (a) entrenched and (b) deemed to provide sufficient security to satisfy everyone's requirements at that time. Also, there is an "inertia" to this sort of thing. Until recently, for example, you couldn't find support for ECC in operating systems. But now we're at a "tipping point" at which industry best practices are in the process of being re-defined.

Riding the crest of the wave are the folks at Certicom Inc., who specialize in cryptographic products and solutions and who have been researching and analyzing all aspects of ECC since its introduction (they introduced their first commercial ECC-based products a decade ago in 1997). Their hard work over the years has paid off, because the NSA has determined that Certicom's ECC-based cryptographic key management system is the best-studied system in the world.

In October 2003, the NSA licensed 26 of Certicom's concept patents for digital signature and key management. Later, in February 2005, based largely on Certicom's work, the NSA announced a suite of ECC-based public key algorithms known as *Suite B* (this has been described as the most significant US government cryptographic stance since the introduction of DES in 1977). In 2006, Sun Microsystems started to support ECC in its Solaris operating system, and Microsoft followed suit beginning in 2007 with its Vista operating system.

I for one am very impressed. I tried to wrap my brain around the complexities of using elliptic curves for cryptographic applications and it made my head hurt, but the guys and gals at Certicom have been wrestling with this stuff for more than twenty years. Now their time has come, and I wish them all the best. Until next time, have a good one!

Clive (Max) Maxfield is author of Bebop to the Boolean Boogie (An Unconventional Guide to Electronics) and The Design Warrior's Guide to FPGAs (Devices, Tools, and Flows). Max is also the co-author of How Computers Do Math, featuring the pedagogical and phantasmagorical virtual DIY Calculator (www.DIYCalculator.com).

In addition to being a hero, trendsetter, and leader of fashion, Max is widely regarded as being an expert in all aspects of computing and electronics (at least by his mother). Max was once referred to as "an industry notable" and a "semiconductor design expert" by someone famous who wasn't prompted, coerced, or remunerated in any way.


©2014 Extension Media. All Rights Reserved.