-----Original Message-----
From: users-bounces@lists.fedoraproject.org [mailto:users-bounces@lists.fedoraproject.org] On Behalf Of Fred Smith
Sent: Friday, June 28, 2013 3:42 PM
To: users@lists.fedoraproject.org
Subject: retrofitting LUKS encryption on installed system
I've got a F19 installation that I'd like to turn into a fully encrypted system with LUKS.
There are many howtos on the web for encrypting a partition, but they all show doing it to /home.
No, just re-install. One partition with /boot and another with an encrypted volume-group, holding /, swap and the rest.
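For the record, the layout described above can be sketched from a rescue shell roughly like the following. This is my own illustration, not what the Fedora installer actually runs; the disk device, partition sizes, and volume-group name are all assumptions, and the script refuses to do anything unless you set DISK deliberately.

```shell
# DESTRUCTIVE sketch -- wipes the disk named in $DISK.  All names and
# sizes here are illustrative assumptions.
DISK="${DISK:-}"
if [ -z "$DISK" ]; then
    echo "Set DISK=/dev/sdX first; refusing to touch any disks."
else
    # One small clear-text /boot partition, the rest a single LUKS container.
    parted -s "$DISK" mklabel msdos \
        mkpart primary ext4 1MiB 500MiB \
        mkpart primary 500MiB 100%

    cryptsetup luksFormat "${DISK}2"            # prompts for the passphrase
    cryptsetup luksOpen   "${DISK}2" cryptroot

    # LVM volume group on top of the LUKS mapping: swap, / and "the rest".
    pvcreate /dev/mapper/cryptroot
    vgcreate vg_sys /dev/mapper/cryptroot
    lvcreate -L 4G       -n swap vg_sys
    lvcreate -l 100%FREE -n root vg_sys
fi
```

After that, /boot goes unencrypted on the first partition and everything else lives inside the one LUKS container, which is what makes the single boot-time passphrase prompt possible.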
But before embarking on that trip, do you really need full disk encryption? I mean, the content of /usr is on any fedora-cd ;-) And when up-and-running, everything is unlocked.
The only valid reason I can think of is that other people have physical access to your machine and could get root access by booting from CD/DVD, and might alter your system.
It surely works, but at a performance price, and with the certainty that you have to enter the LUKS key each time you boot.
This message may contain information that is not intended for you. If you are not the addressee or if this message was sent to you by mistake, you are requested to inform the sender and delete the message. The State accepts no liability for damage of any kind resulting from the risks inherent in the electronic transmission of messages.
On 28.06.2013, J.Witvliet@mindef.nl wrote:
The only valid reason I can think of is that other people have physical access to your machine
If somebody has physical access to your machine, you're hosed. A hardware keylogger could have been installed, or a camera that spies on you and captures your passphrase, among other evil things.
On Fri, Jun 28, 2013 at 05:21:34PM +0200, J.Witvliet@mindef.nl wrote:
-----Original Message-----
From: users-bounces@lists.fedoraproject.org [mailto:users-bounces@lists.fedoraproject.org] On Behalf Of Fred Smith
Sent: Friday, June 28, 2013 3:42 PM
To: users@lists.fedoraproject.org
Subject: retrofitting LUKS encryption on installed system
I've got a F19 installation that I'd like to turn into a fully encrypted system with LUKS.
There are many howtos on the web for encrypting a partition, but they all show doing it to /home.
No, just re-install. One partition with /boot and another with an encrypted volume-group, holding /, swap and the rest.
But before embarking on that trip, do you really need full disk encryption? I mean, the content of /usr is on any fedora-cd ;-) And when up-and-running, everything is unlocked.
The only valid reason I can think of is that other people have physical access to your machine and could get root access by booting from CD/DVD, and might alter your system.
Well, I have employer VPN information, ssh keys allowing me to ssh into my own home system, and sometimes customer's VPN (and possibly other) information on it too, so for all those reasons it has seemed like encrypting the whole thing would make sense.
Fred
Fred Smith wrote:
On Fri, Jun 28, 2013 at 05:21:34PM +0200, J.Witvliet@mindef.nl wrote:
-----Original Message-----
From: users-bounces@lists.fedoraproject.org [mailto:users-bounces@lists.fedoraproject.org] On Behalf Of Fred Smith
Sent: Friday, June 28, 2013 3:42 PM
To: users@lists.fedoraproject.org
Subject: retrofitting LUKS encryption on installed system
I've got a F19 installation that I'd like to turn into a fully encrypted system with LUKS.
There are many howtos on the web for encrypting a partition, but they all show doing it to /home.
No, just re-install. One partition with /boot and another with an encrypted volume-group, holding /, swap and the rest.
But before embarking on that trip, do you really need full disk encryption? I mean, the content of /usr is on any fedora-cd ;-) And when up-and-running, everything is unlocked.
The only valid reason I can think of is that other people have physical access to your machine and could get root access by booting from CD/DVD, and might alter your system.
Well, I have employer VPN information, ssh keys allowing me to ssh into my own home system, and sometimes customer's VPN (and possibly other) information on it too, so for all those reasons it has seemed like encrypting the whole thing would make sense.
Before you move heaven and earth to encrypt everything: is the data small and all in one directory? It sounds like it. If so, you could use the encfs FUSE module to encrypt just that one directory. It has a provision to unmount if the directory is unused for a time, which addresses the "when up-and-running, everything is unlocked" point you mentioned: a few minutes after you give the password, if you don't use the data, it unmounts.
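A hedged sketch of that encfs workflow. The paths are made up, and the commands are shown as comments because the first run prompts interactively for a configuration mode and a password:

```shell
# Hypothetical paths.  The first run creates the encrypted store
# (~/.vpn-keys.enc) and asks for a password; --idle=5 auto-unmounts
# after five idle minutes, per the encfs man page.
#
#   encfs --idle=5 ~/.vpn-keys.enc ~/vpn-keys
#   cp ~/employer.ovpn ~/vpn-keys/    # lands encrypted in ~/.vpn-keys.enc
#   fusermount -u ~/vpn-keys          # manual unmount when done
```

That keeps the VPN configs and ssh keys encrypted at rest without touching the rest of the disk.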
Fred
On 28.06.2013 17:21, J.Witvliet@mindef.nl wrote:
It surely works, but at a performance price, and with the certainty that you have to enter the LUKS key each time you boot.
Intel Sandy/Ivy Bridge processors and later (AMD also) have something called AES-NI which significantly speeds up disk encryption. I haven't done any benchmarks but I see no difference between encrypted and plain LVM in everyday use.
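A quick way to check, as a sketch: the "aes" flag in /proc/cpuinfo indicates AES-NI, and cryptsetup ships a benchmark that compares cipher throughput directly. The helper name has_aesni is my own invention:

```shell
# Sketch: does a cpuinfo-style file advertise the AES-NI "aes" flag?
has_aesni() {
    grep -qw aes "$1"
}

if [ -r /proc/cpuinfo ] && has_aesni /proc/cpuinfo; then
    echo "AES-NI present"
else
    echo "no AES-NI: AES will run in plain software"
fi

# cryptsetup's built-in benchmark shows the real-world difference:
#   cryptsetup benchmark
```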
A user can unlock a LUKS volume using a key on an SD card or any other media that can be mounted during system boot, so no passphrase is needed every time the system is rebooted.
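A rough sketch of the keyfile approach. The device names are assumptions, and the kernel-argument syntax at the end is the rd.luks.key form documented in dracut.cmdline(7); the script refuses to touch key slots unless you set LUKSDEV deliberately:

```shell
# Illustrative sketch: LUKS volume in $LUKSDEV, SD card mounted at /mnt/sdcard.
LUKSDEV="${LUKSDEV:-}"
if [ -z "$LUKSDEV" ]; then
    echo "Set LUKSDEV=/dev/sdX2 first; refusing to touch key slots."
else
    # Generate a random keyfile on the card and enroll it in a spare slot.
    dd if=/dev/urandom of=/mnt/sdcard/root.key bs=512 count=8
    chmod 0400 /mnt/sdcard/root.key
    cryptsetup luksAddKey "$LUKSDEV" /mnt/sdcard/root.key  # asks for an existing passphrase
fi

# To use it at boot with dracut, add to the kernel command line:
#   rd.luks.key=/root.key:UUID=<uuid-of-the-sd-card-partition>
```

The passphrase stays enrolled in its own key slot, so you can still boot without the card if you lose it.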
Mateusz Marzantowicz
On Fri, Jun 28, 2013 at 10:44:09PM +0200, Mateusz Marzantowicz wrote:
On 28.06.2013 17:21, J.Witvliet@mindef.nl wrote:
It surely works, but at a performance price, and with the certainty that you have to enter the LUKS key each time you boot.
Intel Sandy/Ivy Bridge processors and later (AMD also) have something called AES-NI which significantly speeds up disk encryption. I haven't done any benchmarks but I see no difference between encrypted and plain LVM in everyday use.
That would be lovely, but this, unfortunately, is a dual-core Atom processor, so it's gonna be dog slow.
A user can unlock a LUKS volume using a key on an SD card or any other media that can be mounted during system boot, so no passphrase is needed every time the system is rebooted.
Mateusz Marzantowicz
Mateusz Marzantowicz wrote:
On 28.06.2013 17:21, J.Witvliet@mindef.nl wrote:
It surely works, but at a performance price, and with the certainty that you have to enter the LUKS key each time you boot.
Intel Sandy/Ivy Bridge processors and later (AMD also) have something called AES-NI which significantly speeds up disk encryption. I haven't done any benchmarks but I see no difference between encrypted and plain LVM in everyday use.
I just discovered that KVM doesn't seem to pass that flag on to virtual machines, which seems like serious suckage. May be a hardware thing, of course.
A user can unlock a LUKS volume using a key on an SD card or any other media that can be mounted during system boot, so no passphrase is needed every time the system is rebooted.
Leaving the card in the machine kind of defeats the purpose, doesn't it?
And adds to the possibility of forgetting to remove the card when you walk away. Security and convenience are to some extent mutually exclusive.
Mateusz Marzantowicz
Am 29.06.2013 22:23, schrieb Bill Davidsen:
Mateusz Marzantowicz wrote:
On 28.06.2013 17:21, J.Witvliet@mindef.nl wrote:
It surely works, but at a performance price, and with the certainty that you have to enter the LUKS key each time you boot.
Intel Sandy/Ivy Bridge processors and later (AMD also) have something called AES-NI which significantly speeds up disk encryption. I haven't done any benchmarks but I see no difference between encrypted and plain LVM in everyday use.
I just discovered that KVM doesn't seem to pass that flag on to virtual machines, which seems like serious suckage. May be a hardware thing, of course
This has nothing to do with the hardware: the hardware either has AES-NI or it does not.
VMware vSphere passes the flag to the guest
cat /proc/cpuinfo | grep aes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt aes hypervisor lahf_lm ida arat epb pln pts dtherm
Reindl Harald wrote:
Am 29.06.2013 22:23, schrieb Bill Davidsen:
Mateusz Marzantowicz wrote:
On 28.06.2013 17:21, J.Witvliet@mindef.nl wrote:
It surely works, but at a performance price, and with the certainty that you have to enter the LUKS key each time you boot.
Intel Sandy/Ivy Bridge processors and later (AMD also) have something called AES-NI which significantly speeds up disk encryption. I haven't done any benchmarks but I see no difference between encrypted and plain LVM in everyday use.
I just discovered that KVM doesn't seem to pass that flag on to virtual machines, which seems like serious suckage. May be a hardware thing, of course
This has nothing to do with the hardware: the hardware either has AES-NI or it does not.
So far you're right.
VMware vSphere passes the flag to the guest
cat /proc/cpuinfo | grep aes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt aes hypervisor lahf_lm ida arat epb pln pts dtherm
And right again. Unfortunately I didn't say or mean vSphere, but rather KVM, the facility used by qemu-kvm to run virtual machines.
Hardware CPU:
vendor_id : GenuineIntel
model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
On 2.6.32-358.11.1.el6.i68 VM:
vendor_id : GenuineIntel
model name : QEMU Virtual CPU version 1.0.1
flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 popcnt hypervisor lahf_lm
But on 3.9.6-200.fc18.x86_64 VM:
vendor_id : GenuineIntel
model name : QEMU Virtual CPU version 1.0.1
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni cx16 popcnt hypervisor lahf_lm
Other than some changed flag names, neither VM has aes set. I assume the flag is blocked for security, although I don't see bugs about it.
Anyway, switching all our servers to something else at this time is not even worth discussing, so my note was just a warning for people using the KVM tools included in Fedora.
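For what it's worth, the masking comes from the default "QEMU Virtual CPU" model rather than from KVM itself, and libvirt/qemu can be asked to expose the host CPU (including aes) to the guest. A hedged sketch; the guest name f18vm is made up:

```shell
# With libvirt (guest name "f18vm" is hypothetical):
#   virsh edit f18vm
# then inside the <domain> element request the host CPU model:
#   <cpu mode='host-model'/>
# (or mode='host-passthrough' for the closest match to the host).
#
# When launching qemu-kvm by hand, the equivalent is:
#   qemu-kvm -cpu host -m 1024 disk.img
#
# Afterwards, /proc/cpuinfo inside the guest should list "aes" again.
```

The trade-off is that such a guest can no longer be migrated to a host with a different CPU, which is exactly the problem EVC-style flag filtering exists to solve.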
Am 29.06.2013 23:12, schrieb Bill Davidsen:
And right again. Unfortunately I didn't say or mean vSphere, but rather KVM, the facility used by qemu-kvm to run virtual machines.
Hardware CPU:
vendor_id : GenuineIntel
model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
On 2.6.32-358.11.1.el6.i68 VM:
vendor_id : GenuineIntel
model name : QEMU Virtual CPU version 1.0.1
flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 popcnt hypervisor lahf_lm
But on 3.9.6-200.fc18.x86_64 VM:
vendor_id : GenuineIntel
model name : QEMU Virtual CPU version 1.0.1
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni cx16 popcnt hypervisor lahf_lm
Other than some changed flag names, neither VM has aes set. I assume the flag is blocked for security, although I don't see bugs about it.
Anyway, switching all our servers to something else at this time is not even worth discussing, so my note was just a warning for people using the KVM tools included in Fedora.
looks like KVM is still far behind VMware
"model name: QEMU Virtual CPU version 1.0.1" - what the hell? On VMware you have the same CPU as the host, and only "VMware EVC" filters CPU capabilities, to provide reliable hot migration between hosts by making only the flags of the oldest CPU in the cluster visible to guests.
That's why a VMware guest has around 95-98% of the native performance: there is only a little binary translation and most instructions are passed through 1:1.
Reindl Harald wrote:
Am 29.06.2013 23:12, schrieb Bill Davidsen:
And right again. Unfortunately I didn't say or mean vSphere, but rather KVM, the facility used by qemu-kvm to run virtual machines.
Hardware CPU:
vendor_id : GenuineIntel
model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
On 2.6.32-358.11.1.el6.i68 VM:
vendor_id : GenuineIntel
model name : QEMU Virtual CPU version 1.0.1
flags : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm unfair_spinlock pni cx16 popcnt hypervisor lahf_lm
But on 3.9.6-200.fc18.x86_64 VM:
vendor_id : GenuineIntel
model name : QEMU Virtual CPU version 1.0.1
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni cx16 popcnt hypervisor lahf_lm
Other than some changed flag names, neither VM has aes set. I assume the flag is blocked for security, although I don't see bugs about it.
Anyway, switching all our servers to something else at this time is not even worth discussing, so my note was just a warning for people using the KVM tools included in Fedora.
looks like KVM is still far behind VMware
"model name: QEMU Virtual CPU version 1.0.1" - what the hell? On VMware you have the same CPU as the host, and only "VMware EVC" filters CPU capabilities, to provide reliable hot migration between hosts by making only the flags of the oldest CPU in the cluster visible to guests.
That's why we use KVM: migrations may not be within a cluster, or may not be real-time "migrations" as you are thinking of them, but rather may involve a machine being backed up until the next time there is a support need for it. Different environment, different goals.
That's why a VMware guest has around 95-98% of the native performance: there is only a little binary translation and most instructions are passed through 1:1.
And as I remember if there was one old machine in the cluster you wouldn't have the aes instruction either. That's from docs, haven't tried VMware in a very long time.
Am 29.06.2013 23:38, schrieb Bill Davidsen:
Reindl Harald wrote:
"model name: QEMU Virtual CPU version 1.0.1" - what the hell? On VMware you have the same CPU as the host, and only "VMware EVC" filters CPU capabilities, to provide reliable hot migration between hosts by making only the flags of the oldest CPU in the cluster visible to guests.
That's why we use KVM: migrations may not be within a cluster, or may not be real-time "migrations" as you are thinking of them, but rather may involve a machine being backed up until the next time there is a support need for it. Different environment, different goals.
The goal of virtualization in production is live migration and failover; this way you have zero downtime for host upgrades / reboots.
That's why a VMware guest has around 95-98% of the native performance: there is only a little binary translation and most instructions are passed through 1:1.
And as I remember if there was one old machine in the cluster you wouldn't have the aes instruction either. That's from docs, haven't tried VMware in a very long time
that is why i mentioned "VMware EVC"
You badly need this, because any running process inside a virtual machine will crash if it is using CPU instructions which are not available on the CPU of the target host after a migration; and with "VMware DRS" the cluster automatically starts live migrations if one host is overloaded while others are idle, to spread the load of the guests sensibly across the available hosts.
Virtualization is the basis of my daily job, and after working with these features for some time you never again set up a server on bare metal to gain a few percent more performance with no safety net, or build overly complex HA setups inside the machines themselves instead of having them a layer deeper than your production OS.
Well, I love open source, and Fedora/CentOS runs on the guests, but until now there is no open-source solution that can beat VMware on certified hardware with proper support.
Reindl Harald wrote:
Am 29.06.2013 23:38, schrieb Bill Davidsen:
Reindl Harald wrote:
"model name: QEMU Virtual CPU version 1.0.1" - what the hell? On VMware you have the same CPU as the host, and only "VMware EVC" filters CPU capabilities, to provide reliable hot migration between hosts by making only the flags of the oldest CPU in the cluster visible to guests.
That's why we use KVM: migrations may not be within a cluster, or may not be real-time "migrations" as you are thinking of them, but rather may involve a machine being backed up until the next time there is a support need for it. Different environment, different goals.
The goal of virtualization in production is live migration and failover; this way you have zero downtime for host upgrades / reboots.
I see the problem, you have mistaken YOUR goal for THE goal.
OUR goal is to be able to bring up some set of test machines for a short time, and 100% uptime isn't an issue. Being able to bring up machines on whatever hardware is currently in burn-in, or sitting around, IS a goal, it saves us from having to keep a machine around just for that purpose. As long as there is such a machine, we have satisfied our hardware requirements, so we don't need a hot backup or the admin issues a cluster incurs.
On Sat, 2013-06-29 at 23:51 +0200, Reindl Harald wrote:
Am 29.06.2013 23:38, schrieb Bill Davidsen:
Reindl Harald wrote:
"model name: QEMU Virtual CPU version 1.0.1" - what the hell? On VMware you have the same CPU as the host, and only "VMware EVC" filters CPU capabilities, to provide reliable hot migration between hosts by making only the flags of the oldest CPU in the cluster visible to guests.
That's why we use KVM: migrations may not be within a cluster, or may not be real-time "migrations" as you are thinking of them, but rather may involve a machine being backed up until the next time there is a support need for it. Different environment, different goals.
The goal of virtualization in production is live migration and failover; this way you have zero downtime for host upgrades / reboots.
That's why a VMware guest has around 95-98% of the native performance: there is only a little binary translation and most instructions are passed through 1:1.
And as I remember if there was one old machine in the cluster you wouldn't have the aes instruction either. That's from docs, haven't tried VMware in a very long time
that is why i mentioned "VMware EVC"
You badly need this, because any running process inside a virtual machine will crash if it is using CPU instructions which are not available on the CPU of the target host after a migration; and with "VMware DRS" the cluster automatically starts live migrations if one host is overloaded while others are idle, to spread the load of the guests sensibly across the available hosts.
Virtualization is the basis of my daily job, and after working with these features for some time you never again set up a server on bare metal to gain a few percent more performance with no safety net, or build overly complex HA setups inside the machines themselves instead of having them a layer deeper than your production OS.
Well, I love open source, and Fedora/CentOS runs on the guests, but until now there is no open-source solution that can beat VMware on certified hardware with proper support.
oVirt does this for free, as does the Red Hat product RHEV: https://gb.redhat.com/products/cloud-computing/virtualization/ - live migration with HA is part of the base package; you don't need to buy an extra subscription.
Junk.
Am 01.07.2013 20:11, schrieb Junk:
On Sat, 2013-06-29 at 23:51 +0200, Reindl Harald wrote:
Am 29.06.2013 23:38, schrieb Bill Davidsen:
Reindl Harald wrote:
"model name: QEMU Virtual CPU version 1.0.1" - what the hell? On VMware you have the same CPU as the host, and only "VMware EVC" filters CPU capabilities, to provide reliable hot migration between hosts by making only the flags of the oldest CPU in the cluster visible to guests.
That's why we use KVM: migrations may not be within a cluster, or may not be real-time "migrations" as you are thinking of them, but rather may involve a machine being backed up until the next time there is a support need for it. Different environment, different goals.
The goal of virtualization in production is live migration and failover; this way you have zero downtime for host upgrades / reboots.
That's why a VMware guest has around 95-98% of the native performance: there is only a little binary translation and most instructions are passed through 1:1.
And as I remember if there was one old machine in the cluster you wouldn't have the aes instruction either. That's from docs, haven't tried VMware in a very long time
that is why i mentioned "VMware EVC"
You badly need this, because any running process inside a virtual machine will crash if it is using CPU instructions which are not available on the CPU of the target host after a migration; and with "VMware DRS" the cluster automatically starts live migrations if one host is overloaded while others are idle, to spread the load of the guests sensibly across the available hosts.
Virtualization is the basis of my daily job, and after working with these features for some time you never again set up a server on bare metal to gain a few percent more performance with no safety net, or build overly complex HA setups inside the machines themselves instead of having them a layer deeper than your production OS.
Well, I love open source, and Fedora/CentOS runs on the guests, but until now there is no open-source solution that can beat VMware on certified hardware with proper support.
oVirt does this for free, as does the Red Hat product RHEV: https://gb.redhat.com/products/cloud-computing/virtualization/ - live migration with HA is part of the base package; you don't need to buy an extra subscription.
That's all nice, but until now you do not get certified appliances running on it, like https://www.barracuda.com/products/spamandvirusfirewall/vx, or things like http://www.vmware.com/products/datacenter-virtualization/vsphere/data-protec... out of the box, which beats most backup solutions in efficiency and in disaster recovery.
Until now there are things that come partly close to the VMware ecosystem, but I see nothing able to beat them in the context of *easy* management, where you only have to bother with the stripped-down Linux guest systems.
On 29.06.2013 22:23, Bill Davidsen wrote:
Leaving the card in the machine kind of defeats the purpose, doesn't it?
And adds to the possibility of forgetting to remove the card when you walk away. Security and convenience are to some extent mutually exclusive.
Every security mechanism has its disadvantages. I only said the user has a choice between inserting an SD card and entering a passphrase every time the system boots. I prefer to insert/remove the SD card - it's faster and I don't have to waste my time waiting for the password prompt.
BTW, after you unlock the encrypted volumes either way, your system is as completely open and vulnerable as if you hadn't used any disk encryption. So does it really make a difference how you unlocked your machine?
Mateusz Marzantowicz
J.Witvliet@mindef.nl wrote:
-----Original Message-----
From: users-bounces@lists.fedoraproject.org [mailto:users-bounces@lists.fedoraproject.org] On Behalf Of Fred Smith
Sent: Friday, June 28, 2013 3:42 PM
To: users@lists.fedoraproject.org
Subject: retrofitting LUKS encryption on installed system
I've got a F19 installation that I'd like to turn into a fully encrypted system with LUKS.
There are many howtos on the web for encrypting a partition, but they all show doing it to /home.
No, just re-install. One partition with /boot and another with an encrypted volume-group, holding /, swap and the rest.
But before embarking on that trip, do you really need full disk encryption? I mean, the content of /usr is on any fedora-cd ;-) And when up-and-running, everything is unlocked.
The only valid reason I can think of is that other people have physical access to your machine and could get root access by booting from CD/DVD, and might alter your system.
If they have secret access they can install evil devices; but encryption still helps if you are protecting against theft (laptops) or against someone who comes with a search warrant (NSA) and takes your drives.
It surely works, but at a performance price, and with the certainty that you have to enter the LUKS key each time you boot.
The only safe place to store password info is in your head. If one other person has it, it's not a secret, so you have to decide whether losing the data is worse than having someone else get it. That's a policy decision, not a technical one.