Vic (J.R.) Winkler, in Securing the Cloud, 2011

Privacy and Confidentiality Concerns

Beyond the information asset risks we discussed above, we may be processing, storing, or transmitting data that is subject to regulatory and compliance requirements. When data falls under such restrictions, our choice of cloud deployment (be it private, hybrid, or public) hinges on verifying that the provider is fully compliant; otherwise we risk violating privacy, regulatory, or other legal requirements. This obligation usually falls on the tenant or user. It should go without saying that the implications for maintaining the security of information are significant when it comes to privacy, business, and national security information.

Privacy violations occur often enough outside cloud computing for us to be concerned about any system—cloud-based or traditional—storing, processing, or transmitting such sensitive information. In 2010, several privacy exposures occurred at a number of cloud-based services, including Facebook, Twitter, and Google.

Privacy concerns with the cloud model are not fundamentally new. As a tenant with legal privacy obligations, your handling of private information is not going to be different because you use a cloud. Just as you would not store such information on a server that lacked adequate controls, you would not select a cloud provider without verifying that it meets the same benchmarks for how it protects data at rest, in transmission, and during processing. That said, your policy may quite reasonably shun the use of any external provider to manage such information for you, cloud included. It also bears pointing out that while there may be a perception that the computer on your desk is safer than one in a public cloud, unless you are taking unusual technical and procedural precautions with your desktop computer, it is more apt to be the one with the weaker security. But safety and governance are two separate issues, and as part of due diligence you will need to fully understand a provider's privacy governance along with its security practices and guidelines.

As with personal information subject to privacy laws, classes of business information and national security information are also subject to regulation and law. National security information and processes benefit from a strong and well-developed corpus of law, regulation, and guidance. These derive from public law and flow downward through each individual agency or officially responsible entity. Although cloud is a relatively new model, a studied examination of the available guidance should be ample to absolutely restrict any classified information from residing in a public cloud. The area of probable concern lies with other government functions that do not process sensitive or classified data. Suffice it to say, when you examine the opportunity to use public clouds, there are many distinct and separate lines of business between a national government and a local jurisdiction. Given the size of government and the number of levels and jurisdictions, it seems as though government itself could operate a series of community clouds for its exclusive use, thereby obtaining the benefits while avoiding the issues of cohabitation in a public cloud. On the other hand, if government is to use a public cloud, then that service must fully meet the interests of the tenant and all applicable regulations and laws. It is possible for a tenant to implement additional security controls that meet regulatory or legal requirements even when the underlying public IaaS or PaaS does not fully meet those same requirements. However, it must be understood that the range of additional controls a tenant can add is limited and cannot overcome many gaps in some public cloud services.
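To make the last point concrete, the sketch below shows one additional control a tenant can layer on top of a public IaaS or PaaS: encrypting data client-side under a tenant-held key before it is stored with the provider, so the provider only ever holds ciphertext. This is a minimal illustration, not the author's prescribed method; the Python cryptography library calls are real, but the provider upload call is a hypothetical placeholder, and as noted above such a control still cannot close every compliance gap.

    from cryptography.fernet import Fernet

    def encrypt_for_cloud(plaintext: bytes, key: bytes) -> bytes:
        # Data is encrypted before it leaves the tenant's environment,
        # so the provider stores only ciphertext.
        return Fernet(key).encrypt(plaintext)

    def decrypt_from_cloud(ciphertext: bytes, key: bytes) -> bytes:
        # The key never leaves tenant custody; decryption happens on retrieval.
        return Fernet(key).decrypt(ciphertext)

    key = Fernet.generate_key()            # tenant-managed key (e.g., in an on-premises key store)
    record = b"name=J. Doe; account=..."
    blob = encrypt_for_cloud(record, key)
    # upload_to_provider(bucket, blob)     # hypothetical provider API call
    assert decrypt_from_cloud(blob, key) == record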

URL: https://www.sciencedirect.com/science/article/pii/B9781597495929000038

A Taxonomy of Software Integrity Protection Techniques

Mohsen Ahmadvand, ... Florian Kelbert, in Advances in Computers, 2019

Abstract

Tampering with software by man-at-the-end (MATE) attackers is an attack that can lead to security circumvention, privacy violation, reputation damage, and revenue loss. In this model, adversaries are end users who have full control over the software as well as its execution environment. This full control enables them to tamper with programs to their own benefit and to the detriment of software vendors or other end users. Software integrity protection research seeks means to mitigate these attacks. Since the seminal work of Aucsmith, a great deal of research effort has been devoted to fighting MATE attacks, and many protection schemes have been designed by both academia and industry. Advances in trusted hardware, such as TPM and Intel SGX, have also enabled researchers to utilize such technologies for additional protection. Despite the introduction of various protection schemes, there is no comprehensive comparative study that points out the advantages and disadvantages of different schemes. The constraints of different schemes and their applicability in various industrial settings have not been studied. More importantly, except for some partial classifications, to the best of our knowledge there is no taxonomy of integrity protection techniques. These limitations have left practitioners in doubt about the effectiveness and applicability of such schemes to their infrastructure. In this work, we propose a taxonomy that captures the protection process from the system, defense, and attack perspectives. We then carry out a survey and map the reviewed papers onto our taxonomy. Finally, we correlate different dimensions of the taxonomy and discuss our observations along with research gaps in the field.

URL: https://www.sciencedirect.com/science/article/pii/S0065245817300591

Confidentiality and Integrity and Privacy Requirements in the IoT

Tyson Macaulay, in RIoT Control, 2017

Nonintrinsic Privacy

Privacy is not intrinsic to the IoT. That is to say: where you find an IoT system or service, do not assume there is a potential privacy violation lurking.

Privacy—like any other potential requirement or vulnerability in a given IoT system or service—is something to be assessed rather than assumed. As we will discuss later in this book, the potential to inflict damage on the IoT by establishing inappropriate, hard-and-fast privacy requirements is significant.

The massive amount of data present in the IoT as a whole, across all its elements and services and without regard to differences of ownership and management or of physical and logical storage, means there is no question that the IoT, en masse, is potentially, massively personal. If you can access, correlate, and associate identity with activity logs and events in the IoT, you will pretty much be able to write a biography that will shock mothers and end marriages. The issue is that this is easier said than done. More on this to follow.
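Mechanically, the correlation step described above is just a join on a shared identifier. The sketch below uses hypothetical field names and records to show the idea; in practice the difficulty lies in obtaining and aligning data from silos with different owners, formats, and access controls, which is exactly what makes this "easier said than done."

    # Two hypothetical data silos: a device-ownership table and an event log.
    identities = {"dev-42": "A. Smith"}
    activity = [
        {"device": "dev-42", "event": "door_unlock", "time": "02:13"},
        {"device": "dev-42", "event": "gps_fix", "place": "clinic", "time": "02:40"},
        {"device": "dev-99", "event": "gps_fix", "place": "park", "time": "03:05"},
    ]

    # The join: attach a person to every event whose device we can identify.
    biography = [
        {**event, "person": identities[event["device"]]}
        for event in activity
        if event["device"] in identities
    ]
    print(biography)  # once silos are joined, events read as a personal narrative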

While there is plenty of risk associated with privacy in the IoT, this risk needs to be kept in perspective and, most importantly, understood in the context of the requirements derived from both regulation (not a great source of requirements, in reality) and customer expectations (probably more important than regulation, in effect).

URL: https://www.sciencedirect.com/science/article/pii/B9780124199712000078

Privacy Preservation in Smart Cities

Youyang Qu, ... Shui Yu, in Smart Cities Cybersecurity and Privacy, 2019

5.3 Privacy by Design

“Privacy by design” is a proactive strategy for coping with privacy problems in the Smart City. Its main principles are: proactive action rather than a remedial protection strategy after privacy violations; privacy embedded into the design; full functionality with full privacy protection; respect for the privacy of users; protection of privacy throughout the data lifecycle; and so forth. Recently, there have been some efforts to apply these principles in the design of new systems [66]. However, Perera et al. [2] argue that most current “privacy-by-design” frameworks fail to provide the specific guidance that would enable engineers to design IoT applications.
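One way to read "privacy embedded into the design" is that the collection path itself should minimize data rather than relying on downstream policy. The sketch below, with hypothetical field names, pseudonymizes the identifier at ingest and discards fields the service does not need; it is an illustration of the principle under stated assumptions, not the kind of engineering framework Perera et al. call for.

    import hashlib
    import hmac

    KEY = b"deployment-specific secret"  # assumed to live in a key store, not in code

    def collect(reading: dict) -> dict:
        # Pseudonymize the identifier at ingest and keep only the operational value;
        # name, address, and the raw user_id never enter the system.
        token = hmac.new(KEY, reading["user_id"].encode(), hashlib.sha256)
        return {"subject": token.hexdigest()[:16],
                "temperature": reading["temperature"]}

    print(collect({"user_id": "alice", "name": "Alice Q.",
                   "address": "12 Elm St.", "temperature": 21.5}))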

URL: https://www.sciencedirect.com/science/article/pii/B9780128150320000068

Biometrics and The Future

Stuart Sumner, in You: for Sale, 2016

So What are the Risks?

This book has dealt extensively with government surveillance. It's pervasive, invasive, and it's not going away any time soon. But couple this obsession with surveillance with biometrics, and the potential for privacy violations increases exponentially.

For example, the city of New York operates a network of around 3,000 cameras called the ‘Domain Awareness System’, essentially a CCTV network much the same as that used in London and many other cities. If a crime is committed and the police know roughly where and when it happened, they can scan through the relevant recording histories to find it.

But what if systems such as this were equipped with facial recognition technology? Anyone sufficiently motivated would be able very simply to track you throughout your daily routine.

“A person who lives and works in lower Manhattan would be under constant surveillance,” Jennifer Lynch, an attorney at the Electronic Frontier Foundation has been widely quoted as saying.

And this threat is fast becoming a reality. The Department of Homeland Security is working on a five-billion-dollar project to develop what it calls the Biometric Optical Surveillance System (which becomes no less disturbing when you refer to it by its acronym: BOSS). The system aims to recognize people with 90 per cent certainty at a range of 100 meters (something made possible because organizations like the NSA have been harvesting people's photographs for years and building vast databases), and it has been predicted to be operational by 2018.

Things become more worrying still once your DNA profile gets digitized. For one thing, various commercial bodies, including insurers, will want to get hold of the data to scan your profile for risks and revenue-generating opportunities (‘Hi there, we’ve noticed that you have a genetic predisposition towards colon trouble, why not try our new herbal range of teas, proven to ease such complaints in over 80 per cent of cases’ is a fabricated, yet disturbingly believable, example of what could happen). Worse still, what if some government agency one day purports to have found a genetic sequence indicating a propensity towards crime?

Alternatively, what happens when a malicious party appropriates your genetic code? You can change your password, or the locks on your front door, but your DNA sequence?

URL: https://www.sciencedirect.com/science/article/pii/B9780128034057000102

Crowdsensing and Privacy in Smart City Applications

Raj Gaire, ... Surya Nepal, in Smart Cities Cybersecurity and Privacy, 2019

5.5 Privacy Pitfalls of Authentication

According to Schneider [58] (textbook material from a yet-to-be-published book on cybersecurity by Prof. Fred Schneider, a leading security expert at Cornell University who has championed several cybersecurity guidelines in the United States), authentication, when undertaken injudiciously, can lead to privacy violations for the following reasons.

First, in authenticating somebody, you learn their identity and thereby an associated set of attributes, some of which could be considered personal information. Authentication can thus lead to the revelation of personal information.

Second, a threat to privacy arises when authentication is used to validate participants in some action. Participation may itself be private (e.g., certain medical purchases or medical procedures); thus a side effect of authentication is to associate personal information with an identity. This problem is compounded when the same identifier is used to authenticate an individual in connection with multiple actions, enabling third parties to connect seemingly unrelated actions to a single individual and then make inferences about additional attributes of that individual.

Third, a requirement of authentication implicitly institutes, in a sense, a form of authorization. The prospect of undergoing authentication thus inhibits people from engaging in activities they fear could be misconstrued, deemed inappropriate, or lead to retribution. The concern here is not simply that there is an erosion of basic freedoms when authentication is required, but that this erosion is inadvertent: policy—not side effects of a system's construction—should dictate who may engage in what activities, and authorization mechanisms—not authentication mechanisms—should implement that policy.

Finally, an authentication system collects, and possibly stores, information for subsequent use. Information collected in this way, especially without consent, must be protected against abuse, including being linked with other data.

Consequently, widespread deployment of authentication mechanisms increases the potential for privacy violations in three ways: (i) personal information could be abused by the agency collecting it, (ii) stored personal information could be stolen, and (iii) having personal information further increases the risk of inference by linking shared identities or other shared attributes.
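The linkage risk in point (iii) follows directly from reusing one identifier across actions: anyone holding two logs can join them on it. A common mitigation, sketched below with hypothetical names, is to derive a distinct pairwise pseudonym per service, much as pairwise identifiers do in protocols such as OpenID Connect; this is an illustration of the idea, not Schneider's prescription.

    import hashlib
    import hmac

    SECRET = b"identity-provider secret"  # assumed to be held only by the identity provider

    def pseudonym(user_id: str, service: str) -> str:
        # Stable within one service, so authentication still works there,
        # but unlinkable across services without SECRET.
        msg = f"{service}:{user_id}".encode()
        return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:16]

    at_pharmacy = pseudonym("alice", "pharmacy")
    at_forum = pseudonym("alice", "forum")
    print(at_pharmacy, at_forum)
    assert at_pharmacy != at_forum  # logs from the two services cannot be joined on these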

URL: https://www.sciencedirect.com/science/article/pii/B9780128150320000056

Mobile Security

S. Tully, Y. Mohanraj, in Mobile Security and Privacy, 2017

12.2.1 Code Encryption

There are multiple reasons for a developer or an organization to encrypt the code used in their mobile application. From an operating system perspective, it helps maintain system integrity by providing a facility to detect code integrity violations. From a user perspective, it protects against information theft and privacy violations.

Code encryption is not implemented by default by all mobile operating system providers. iOS, for example, performs binary encryption by default; this is not necessarily the case with Android-based devices. Note that code signing is not the same as code encryption, although both of the major mobile application curators require apps to be signed to varying degrees. We will discuss this further later on.

Code encryption may help prevent reverse engineering or code modification; however, the effectiveness of this control has been contested by many practitioners. While poor key management practices and the implementation of insecure algorithms heavily degrade the effectiveness of code encryption, the question these practitioners raise is: what is the point of encrypting code when it must be decrypted on the device before being loaded into the processor for execution? At that point in runtime, taking a snapshot of the decrypted code in memory is relatively trivial.

Identity-based code execution, using code signing techniques, may well address most of these security and privacy concerns and can overcome many of the limitations of code encryption alone. As with any technical control, it is imperative that the intent is not lost during implementation: certificate and key management should include appropriate process and technical controls if the benefits of code signing are to be realized. Self-signed certificates and poorly managed private keys offer no real benefit.
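As a concrete illustration of identity-based code execution, the sketch below verifies a publisher's signature over the code bytes before anything runs; tampering with either the code or the signature aborts the load. It is a minimal model under stated assumptions, not any platform's actual loader: real systems pin the trusted public key in hardware or in the operating system trust store, and all names here are hypothetical.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side, at release time: sign the code bytes.
    signing_key = Ed25519PrivateKey.generate()
    code = b"print('application payload')"
    signature = signing_key.sign(code)
    trusted_key = signing_key.public_key()  # in reality, pinned in the OS or hardware

    # Device side, at every load: verify before executing anything.
    def load_if_authentic(code: bytes, signature: bytes) -> None:
        try:
            trusted_key.verify(signature, code)  # raises InvalidSignature if tampered
        except InvalidSignature:
            raise RuntimeError("code integrity violation: refusing to execute")
        exec(code)  # only verified code reaches execution

    load_if_authentic(code, signature)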

URL: https://www.sciencedirect.com/science/article/pii/B978012804629600002X

Facing the Cybercrime Problem Head-On

Littlejohn Shinder, Michael Cross, in Scene of the Cybercrime (Second Edition), 2008

U.S. Federal and State Statutes

We have already mentioned the somewhat broad definition of computer crime adopted by the U.S. DOJ. Individual federal agencies (and task forces within those agencies) have their own definitions. For example, the FBI investigates violations of the federal Computer Fraud and Abuse Act, which lists specific categories of computer and network-related crimes:

Public switched telephone network (PSTN) intrusions

Major computer network intrusions

Network integrity violations

Privacy violations

Industrial/corporate espionage

Software piracy

Other crimes in which computers play a major role in committing the offense

USA PATRIOT Act and Protect America Act

Many aspects of the Computer Fraud and Abuse Act were amended by the USA PATRIOT Act, which increased penalties and allowed the prosecution of individuals who intended to cause damage, as opposed to only those who actually caused damage. USA PATRIOT is an acronym for Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism. As its clumsy and cumbersome title indicates, the act was created after the September 11, 2001 terrorist attacks on the United States, and it was pushed through the U.S. Senate to give law enforcement enhanced authority to monitor private communications and access personal information.

Another act that was signed into law by President Bush in August 2007 is the Protect America Act (nicknamed by many as PATRIOT II). It also provides greater authority to law enforcement, and allows the government to perform such actions as:

Access the credit reports of a citizen without a subpoena

Conduct domestic wiretaps without a court order for 15 days after an attack on the United States or congressional authorization of use of force

Criminalize the use of encryption software used in the commission or planning of a felony

Extend authorization periods used for wiretaps or Internet surveillance

The focus of the Protect America Act was to update the Foreign Intelligence Surveillance Act (FISA) and address shortcomings in a law that did not contemplate modern technology. However, these acts were controversial enough that the U.S. DOJ created http://www.lifeandliberty.gov, a Web site designed to provide information and counter arguments against these two acts.

State Laws

Title 18 of the U.S. Code, in Chapter 47, Section 1030, defines a number of fraudulent and related activities that can be prosecuted under federal law in connection with computers. Most pertain to crimes involving data that is protected under federal law (such as national security information), involving government agencies, involving the banking/financial system, or involving interstate or international commerce or “protected” computers. Defining and prosecuting crimes that don't fall into these categories is usually the province of each state.

Most U.S. states have laws pertaining to computer crime. These statutes are generally enforced by state and local police and might contain their own definitions of terms. For example, the Texas Penal Code's Computer Crimes section (which is available to view at http://tlo2.tlc.state.tx.us/statutes/pe.toc.htm) defines only two offenses:

Online Solicitation of a Minor (Texas Penal Code Section 33.021).

Breach of Computer Security (Texas Penal Code Section 33.02), which is defined as “knowingly accessing a computer, computer network, or computer system without the effective consent of the owner.” The classification and penalty grade of the offense are increased according to the dollar amount of loss to the system owner or benefit to the offender.

Section 502 of the California Penal Code, on the other hand, defines a list of eight acts that constitute computer crime, including altering, damaging, deleting, or otherwise using computer data to execute a scheme to defraud; deceiving, extorting, or wrongfully controlling or obtaining money, property, or data; using computer services without permission; disrupting computer services; assisting another in unlawfully accessing a computer; and introducing contaminants (such as viruses) into a system or network. Additional sections of the penal code address other computer- and Internet-related crimes, such as those dealing with child pornography and other offenses that may incorporate the use of a computer. However, as stated earlier, these are not necessarily dependent on the use of computers or other technologies.

The definition of computer crime under state law thus differs from state to state. Once again, the jurisdictional question rears its ugly head: if the multijurisdictional nature of cybercrime prevents us from even defining it, how can we expect to prosecute it effectively?

URL: https://www.sciencedirect.com/science/article/pii/B9781597492768000017

Conducting a Privacy Impact Assessment

Laura P. Taylor, in FISMA Compliance Handbook, 2013

Privacy laws, regulations, and rights

The Homeland Security Act of 2002 requires all federal departments and agencies to appoint a Privacy Officer and further assigns certain responsibilities to the Privacy Officer. (In some agencies, the Privacy Officer is known as the Senior Agency Official for Privacy.) Conducting a Privacy Impact Assessment is one of the responsibilities. Another responsibility of the Privacy Officer is to ensure that systems of record adhere to the Privacy Act. Aside from managing the internal oversight of privacy, the senior privacy official is supposed to prepare

… a report to Congress on an annual basis on activities of the Department that affect privacy, including complaints of privacy violations, implementation of the Privacy Act of 1974, internal controls, and other matters.

On May 22, 2006, after it was thought that the private information of 26 million US Veterans had been stolen on a USB flash drive, Clay Johnson III, the Acting Director of the OMB, issued an important memorandum on privacy to the heads of federal departments and agencies. In the memo, Mr. Johnson reminded heads of departments and agencies that, “The loss of personally identifiable information can result in substantial harm, embarrassment, and inconvenience to individuals and may lead to identity theft or other fraudulent use of the information. Because Federal agencies maintain significant amounts of information concerning individuals, we have a special duty to protect that information from loss and misuse.”

Mr. Johnson goes on to cite an excerpt from the Privacy Act with a reminder that each federal department or agency should establish

… rules of conduct for persons involved in the design, development, operation, or maintenance of any system of records, or maintaining any record, and instruct each such person with respect to such rules and the requirements of [the Privacy Act], including any other rules and procedures adopted pursuant to this [Act] and the penalties for noncompliance, and

appropriate administrative, technical and physical safeguards to insure the security and confidentiality of records and to protect against any anticipated threats or hazards to their security or integrity which could result in substantial harm, embarrassment, inconvenience or unfairness to any individual on whom information is maintained. (5 U.S.C. § 552a(e)(9)-(10))

The memo further states that heads of departments and agencies should conduct a review of privacy policies and processes, and take corrective action as appropriate, to ensure that the agency has adequate safeguards to prevent the intentional or negligent misuse of, or unauthorized access to, personally identifiable information. Mr. Johnson also requested that heads of departments and agencies include the results of the review with their FISMA compliance reports. The memo in its entirety can be viewed at http://www.whitehouse.gov/omb/memoranda/fy2006/m-06-15.pdf.

URL: https://www.sciencedirect.com/science/article/pii/B9780124058712000129

Google+

Jennifer Golbeck, in Introduction to Social Media Investigation, 2015

Google Buzz

To compete in the growing social networking space, Google launched Google Buzz in 2010. The launch is almost universally agreed to have been a disaster because of major privacy issues. Google automatically created a Google Buzz account for everyone who used Gmail, Google's email system. They opted all these users in, turned on their accounts, and publicly listed the names of the people each person corresponded with most frequently in Gmail. Thus, without any action by a user, Google shared all this information with the world. The privacy implications of this became clear quickly.

Business Insider lists a few troubling possible scenarios.1 A husband's profile shows that he has had a lot of contact with an ex. A boss sees that a competitor is a top contact of his employee. What about the case where journalists are emailing confidential, anonymous sources? Those sources could be revealed. Physicians and therapists who used Gmail to correspond with their patients would have their patients' identities revealed as well—a legal violation and a privacy violation.

One case where this had real implications involved a woman with a serious need to keep her information private. She wrote the following about her problems2:

I use my private Gmail account to email my boyfriend and my mother. There's a BIG drop-off between them and my other “most frequent” contacts.

You know who my third most frequent contact is? My abusive ex-husband.

Which is why it's SO EXCITING, Google, that you AUTOMATICALLY allowed all my most frequent contacts access to my Reader, including all the comments I've made on Reader items, usually shared with my boyfriend, who I had NO REASON to hide my current location or workplace from, and never did.

My other most frequent contacts? Other friends of [boyfriend] Flint's.

Oh, also, people who email my ANONYMOUS blog account, which gets forwarded to my personal account. They are frequent contacts as well. Most of them, they are nice people. Some of them are probably nice but a little unbalanced and scary. A minority of them—but the minority that emails me the most, thus becoming FREQUENT—are psychotic men who think I deserve to be raped because I keep a blog about how I do not deserve to be raped, and this apparently causes the Hulk rage.

Google eventually corrected many of these issues, but in a sense, the damage was done. The public outcry at the launch of the site and discussion of major privacy flaws led to users having low trust in the site. It was shut down after less than two years.

URL: https://www.sciencedirect.com/science/article/pii/B9780128016565000135

What is an example of a violation of privacy?

Common invasion-of-privacy torts (wrongful acts) brought against businesses include misusing a person's statements for marketing purposes, publishing someone's likeness without permission, and making email or telephone communications without giving the recipient an opportunity to opt out.

What is the violation of privacy?

A privacy violation occurs when an unintended person learns of someone else's private information.

What is an example of the right to privacy?

For example, individuals may assert a privacy right to be “let alone” when the press reports on their private life or follows them around in an intrusive manner on public and private property.

What is the right to privacy?

Legally, the right of privacy is a basic right that includes: the right of persons to be free from unwarranted publicity; freedom from unwarranted appropriation of one's personality; and freedom from publicizing of one's private affairs without a legitimate public concern.