What is the basic formula for risk analysis?

Risk Assessment Techniques for TIRM

Alexander Borek, ... Philip Woodall, in Total Information Risk Management, 2014

Monte Carlo simulation

The Monte Carlo simulation is a method for obtaining results when a problem can be modeled mathematically but an analytical solution is too complex to derive. Many software tools are available to help build Monte Carlo simulations, such as the TIRM pilot software tool presented in Chapter 12.

The Monte Carlo simulation uses algorithms, which can be run on any computer, that generate a large quantity of random numbers from a chosen distribution. First, the elements to be represented in the simulation are identified and appropriate distributions are chosen for them. Then, the mathematical calculations to be executed on these elements are defined and the number N of simulation runs is determined. The simulation generates N random numbers following the defined distribution for each element and executes the calculations N times. The average, variance, and confidence intervals can then be calculated to summarize the results of these calculations.

The model presented in Chapter 5 is used to make the risk calculations for TIRM process step B9. The Monte Carlo simulation allows the quantitative inputs for TIRM process steps B1 to B7 to be collected in the form of probability distributions instead of exact values, which are often difficult to obtain; this makes it easier for experts to provide these inputs. For each input a statistical distribution is chosen. Commonly used distributions for Monte Carlo simulations are the uniform distribution, the triangular distribution, and the normal distribution. The uniform distribution assumes that values are equally distributed between a lower and an upper boundary, so the expert estimates only these two boundaries as the input. The triangular distribution additionally needs an estimate of the mode (i.e., the most likely value); values are then assumed to lie in the triangle defined by the lower boundary, the upper boundary, and the mode. The normal distribution also requires the estimation of a lower and an upper value as the input, assuming that these are the points between which 95% of the values lie and that the values follow a normal distribution curve.
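If one wanted to sample from these three distributions with a general-purpose tool, the expert estimates could be mapped to distribution parameters roughly as in the following NumPy sketch. The 30%-50% bounds and the 40% mode are illustrative values, and the 1.96 factor follows from treating the expert's bounds as a central 95% interval, as described above.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # number of simulation runs

# Uniform: the expert supplies only a lower and an upper boundary.
uniform_samples = rng.uniform(low=0.30, high=0.50, size=N)

# Triangular: the expert additionally supplies the mode (most likely value).
triangular_samples = rng.triangular(left=0.30, mode=0.40, right=0.50, size=N)

# Normal: the lower/upper estimates are treated as the central 95% interval,
# so the mean is their midpoint and sigma = (upper - lower) / (2 * 1.96).
lower, upper = 0.30, 0.50
normal_samples = rng.normal(loc=(lower + upper) / 2,
                            scale=(upper - lower) / (2 * 1.96),
                            size=N)
```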

EXAMPLE

The business process representatives cannot give an exact value for the probability of an information quality problem in TIRM process step B3, but they do note that the probability is very likely to be between 30% and 50%. Therefore, 30% and 50% are used as the parameters of a uniform distribution. Later, in process step B9, risk totals are calculated using the Monte Carlo simulation, which simulates the results using these parameters as the input.
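A minimal end-to-end sketch of this example, assuming NumPy: the 30%-50% probability bounds come from the example above, while the monetary impact distribution and the number of runs are invented placeholders standing in for the other TIRM inputs.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
N = 100_000  # number of simulation runs

# Probability of the information quality problem: the expert bounds of
# 30%-50% from the example are used as the uniform distribution parameters.
problem_probability = rng.uniform(0.30, 0.50, size=N)

# Monetary impact per occurrence: a purely hypothetical triangular estimate
# (lower, most likely, upper) standing in for the remaining TIRM inputs.
impact = rng.triangular(10_000, 25_000, 60_000, size=N)

# Risk total for each simulation run, then summary statistics.
risk_totals = problem_probability * impact
print(f"mean risk total: {risk_totals.mean():,.0f}")
print(f"95% interval: {np.percentile(risk_totals, 2.5):,.0f} "
      f"to {np.percentile(risk_totals, 97.5):,.0f}")
```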


URL: https://www.sciencedirect.com/science/article/pii/B9780124055476000110

The Security Design Process

Thomas Norman CPP, PSP, CSC, in Integrated Security Systems Design (Second Edition), 2014

Countermeasures Determination

Appropriate countermeasure selection is a process that involves the following steps, which are drawn from the American Petroleum Institute (API) 780 risk analysis methodology. This methodology is one of the most complete and straightforward to use; it allows for a thorough financial and risk calculation and for stakeholder input into the process:

Define the assets to be protected and characterize the facility where they are located. Facility characterization includes a complete description of the environment, including the physical environment, security environment, and operational environment. Determine the criticality of each major asset and the consequences of the loss of the asset. Consequences can be measured in loss of life or injury, loss of monetary value, environmental damage, and loss of business or business continuity.

Perform a threat analysis. Define both the potential threat actors and the threat vectors (methods and tactics that the threat actors may use to gain entry or stage an attack). Threat actors may include terrorists, activists, and criminals. Criminals may be either economic criminals or violent criminals, such as those who cause workplace violence. Rank the threat actors’ motivation, history, and capabilities. Rank the threats by their ability to harm the assets using the previous criteria.

Review the basic vulnerabilities of all the protected assets to the types of attacks common to the declared threat actors.

Evaluate the existing and natural countermeasures that are already in place or in the existing design of the building or its site. For example, does a storm levee make vehicle entry more difficult? Is existing lighting a deterrent? The difference between the identified vulnerabilities and these existing countermeasures is the set of remaining vulnerabilities to protect.

Determine the likelihood of attack:

Determine the probable value of each of the assets to the probable threat actors (asset target value calculation).

Likelihood = threat ranking × asset attractiveness × remaining vulnerabilities.

Calculate the risk of attack: Risk = consequences × likelihood (a minimal calculation sketch follows these steps).

Determine the additional countermeasures needed to fill the remaining gaps in vulnerabilities, prioritized by the risk calculation above (Fig. 8.1).


Figure 8.1. API/NPRA risk calculation.

Determine from what resources the additional countermeasures can be sourced.
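The likelihood and risk formulas above lend themselves to a very small calculation sketch, shown below. The asset names, the 0-1 scoring scale, and the scores themselves are hypothetical and are not prescribed by the API methodology excerpted here.

```python
# Hypothetical scores on a 0-1 scale, for illustration only.
assets = {
    "control room": {"threat_ranking": 0.8, "asset_attractiveness": 0.9,
                     "remaining_vulnerability": 0.6, "consequences": 0.9},
    "warehouse":    {"threat_ranking": 0.5, "asset_attractiveness": 0.4,
                     "remaining_vulnerability": 0.7, "consequences": 0.3},
}

for name, a in assets.items():
    # Likelihood = threat ranking x asset attractiveness x remaining vulnerabilities
    likelihood = (a["threat_ranking"] * a["asset_attractiveness"]
                  * a["remaining_vulnerability"])
    # Risk = consequences x likelihood
    risk = a["consequences"] * likelihood
    print(f"{name}: likelihood = {likelihood:.2f}, risk = {risk:.2f}")
```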


URL: https://www.sciencedirect.com/science/article/pii/B9780128000229000085

Introduction

In GPU Computing Gems Jade Edition, 2012

In this Section

Chapter 23 develops a finite difference approach to value financial derivatives. Starting with a single-factor model, the author extends it to a two-factor model using the alternating direction implicit scheme. Building on the tri-diagonal solver techniques introduced in Chapter 11, the author develops a PDE solver and shows how it can be applied in financial applications to accelerate pricing and risk calculations.

Chapter 24 uses a Monte Carlo simulation to model credit risk, creating a loss distribution for a large portfolio and enabling detailed analytics in the tail of the distribution. Deviating from the normal one-thread-per-scenario method for embarrassingly parallel algorithms, the authors use multiple threads cooperating on a single scenario to improve the memory characteristics. As a result, the performance scales very well and large problem sizes are easily accommodated, enabling significant power and hardware cost savings.

Chapter 25 applies Monte Carlo simulation to market value-at-risk calculation and considers the application from a variety of perspectives to understand where performance can be improved. The authors evolve the application from a naïve implementation by applying algorithmic and high-level optimizations to achieve a significant speedup, thereby enabling such calculations to be performed on demand rather than overnight.
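To give a flavor of the kind of calculation Chapter 25 accelerates, here is a naïve CPU-side Monte Carlo value-at-risk sketch in Python/NumPy. The two-asset portfolio, the covariance matrix, the scenario count, and the 99% confidence level are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_scenarios = 200_000

# Hypothetical two-asset portfolio: position values and daily return covariance.
positions = np.array([1_000_000.0, 500_000.0])
cov = np.array([[0.0004, 0.0001],
                [0.0001, 0.0009]])

# Simulate correlated daily returns and the resulting portfolio P&L per scenario.
returns = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=n_scenarios)
pnl = returns @ positions

# 99% one-day VaR: the loss exceeded in only 1% of the simulated scenarios.
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day VaR: {var_99:,.0f}")
```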


URL: https://www.sciencedirect.com/science/article/pii/B9780123859631000411

29th European Symposium on Computer Aided Process Engineering

Eduardo Sánchez-Ramírez, ... Juan Gabriel Segovia-Hernandez, in Computer Aided Chemical Engineering, 2019

3 Results

All three objective functions were evaluated, and the results obtained are shown in Table 1. All Pareto fronts were obtained after 100,000 evaluations, since beyond that point the vector of decision variables did not produce any meaningful improvement. Note first that the TACs of the two pure distillation schemes differ by 32%. This difference is large enough to select the direct sequence as the better of the two conventional alternatives. Consequently, once the direct alternative had been identified, only the schemes derived from that configuration were considered in the optimization procedure. After the optimization process, some trends among the objective functions were observed. In Figure 5, when the TAC is evaluated jointly with the IR, a clear competing relationship between the two targets appears: as the TAC increases, the individual risk decreases, and vice versa. Figure 5 also shows the antagonistic behavior between the environmental impact and the individual risk associated with the conventional downstream process. It is possible to obtain a process with low environmental impact; however, the probability of an individual accident then increases, and vice versa.

Table 1. Objective function values

Objective Function    TAC [$ y-1]    EI99 [points y-1]    IR [P y-1]
Direct 35,032,419 14,328,558 0.0006686
Indirect 51,155,609 22,627,903 0.0006663
T Coupled 31,360,313 12,559,191 0.0006795
T Equivalent 31,055,124 12,557,857 0.0006684
Intensified 30,536,031 12,407,199 0.0003341


Fig. 5. Pareto fronts between EI99/TAC and TAC/IR for the intensified scheme

Regarding the joint evaluation of TAC and IR, the reason for the reduction in incident probability is worth examining. Since the IR calculation considers both continuous and instantaneous chemical releases, IR generally increases as the internal flows increase: the inventory inside the column grows, so if an accident occurs, the affected area, the duration of the events, and the probability of death are greater because more mass is available to feed fires, explosions, and toxic releases. This is the general tendency; however, the IR calculation also involves several physicochemical properties, such as the heat of combustion and the flammability limits. With this in mind, note that the feed stream to be separated is composed mainly of water, whose physicochemical properties produce the opposite behavior. In other words, IR normally increases with high internal flows (caused by high reflux ratios or large diameters), but in this case study the first column of almost every sequence separates mainly water, so the internal flows are enriched in water. This water dilutes the other components to be separated, and their flammability and toxicity decrease.

Furthermore, in the second column acetoin and 2,3-BD are separated, and there the internal flows follow the usual IR behavior because the quantity of water is smaller than in the first column. As a result of this conflict, the IR does not vary proportionally with the TAC. Finally, the trend of EI99 evaluated jointly with IR in the Pareto front of Figure 5 shows similar behavior. The internal flows play the role already mentioned; however, those flows must be heated, so in almost all alternatives the first column, which separates the water, requires a significant amount of steam and directly impacts the EI99 value, even though the concentration of flammable compounds decreases. On the other hand, the remaining columns, which reduce both the IR and TAC values, are mainly small columns that require fewer utilities than the first column. Table 2 shows the main parameters of the intensified scheme.
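For readers less familiar with multi-objective results such as Figure 5, the short sketch below shows how a Pareto front can be filtered from a set of evaluated designs with two minimization objectives (e.g., TAC and IR). The sample points are invented and are not data from this study.

```python
def pareto_front(points):
    """Return the non-dominated points when every objective is minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Invented (TAC, IR) pairs illustrating the trade-off, not data from this work.
designs = [(31.0, 0.00067), (30.5, 0.00070), (32.0, 0.00060), (33.0, 0.00068)]
print(pareto_front(designs))  # the (33.0, 0.00068) design is dominated
```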

Table 2. Intensified scheme 4a) parameters

C1
Number of stages 87
Reflux ratio 0.333
Feed-stage 42
Column diameter (m) 0.675
Distillate (kg h-1) 3,416.07
Side stage 49
Side flow (kmol h-1) 8.155
Condenser duty (kW) 51,507
Reboiler duty (kW) 61,457


URL: https://www.sciencedirect.com/science/article/pii/B9780128186343500278

Building a Program from Scratch

Evan Wheeler, in Security Risk Management, 2011

Risk at the Enterprise Level

Before you can even hope to tackle risk management at an enterprise level or integrate your security risk management program into an enterprise level view, you need to convince the organization of the value of a common risk formula.

Common Risk Formula

Whether you are a financial analyst looking at credit risk or a member of the human resources team analyzing the high percentage of critical processes being supported by contractors, you ultimately need to have a common formula or method for calculating risk. Even though the details of the model vary between these functions and you can't expect the financial analyst and the human resources staff to have the exact same criteria for a high-severity risk, at the enterprise level there has to be some way to compare them. In fact, the information security risk models that have been used throughout this book would never be directly applicable to other risk domains such as operations or finance, but the framework for the levels and exposure mapping methodology is reusable. All of the evaluation steps are somewhat modular, in that you could substitute your own risk calculations into the lifecycle workflow.

It is very important to establish a common language for risk across the organization. You may have different descriptions of a high severity between the domains, but terms like severity, threat, likelihood, exposure, and vulnerability need to be consistent or you will never be able to have productive discussions about priorities across business units.

With a single format for tracking risks and a single calculation method, you can derive a means of normalizing risks identified by different functions at an enterprise level to get a true picture of the organization's posture. A critical risk for the financial liquidity of assets from the accounting team needs to be equivalent to a critical exposure on a Web service from the information security team. Before you bother implementing any actual risk activities or assessments, start by surveying the different risk models already in use within your organization and align them to a common formula and definition for risk terminology.
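As a minimal sketch of what a common risk formula and normalization across functions could look like, assuming a shared 1-5 likelihood and impact scale: the domain names, scores, and exposure thresholds below are illustrative, not the model used in this book.

```python
def risk_score(likelihood, impact):
    # The common formula shared by every function: risk = likelihood x impact.
    return likelihood * impact

def exposure_level(score):
    # Example thresholds for normalizing scores into exposure levels.
    if score >= 20:
        return "critical"
    if score >= 12:
        return "high"
    if score >= 6:
        return "moderate"
    return "low"

risks = [
    ("finance", "liquidity of key assets", 4, 5),
    ("information security", "exposed Web service", 5, 4),
    ("human resources", "critical process run only by contractors", 3, 3),
]

for domain, description, likelihood, impact in risks:
    score = risk_score(likelihood, impact)
    print(f"[{domain}] {description}: score={score}, exposure={exposure_level(score)}")
```

With a shared formula and thresholds like these, a critical finance risk and a critical information security exposure land at the same level and can be compared side by side.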

Enterprise Risk Committee

In most organizations, the Enterprise Risk Committee is made up of senior management or their representatives. All the different functions should be represented, including information security, legal, compliance, HR, operations, finance, and vendor management. Typical characteristics for an enterprise risk committee include the following:

Looks at risks across the entire organization

Most senior management levels

Information security is just one member

Only the highest level risks are reported

Often systemic or thematic risks are highlighted

Usually reports risks to the board of directors

Often, at this level, risks will be broken up by risk domain (such as brand/reputation or legal/regulatory) and then maybe more specific subcategories or risk areas.

The topic of enterprise risk management is beyond the scope of this book, so we will focus on the role of information security in this program. Most importantly, there need to be clear criteria defined for how and when significant risks will be escalated to this group. You will want to start out by only escalating the most serious risks to this level (those rated as critical risk exposure, for example) because you will want to make sure that you direct attention and focus to those issues. If you present too many risks, they will just get lost and the committee will lose focus. As your program matures and you eliminate critical risks, you may start presenting some key lower level risks, but be careful not to appear to be dominating the risk committee. There are risks from many domains that executives need to understand and balance against any security exposures.

It is a good idea to keep the risk committee current with the security posture of the organization. One approach to this is to present a “state of the company” type of report to the risk committee, in which you look at the risk posture and exposure level across the organization. It is essential to realize that information security issues are just one source of risks that need to get prioritized and weighed against other business risks.

Mapping Risk Domains to Business Objectives

A risk domain is a high-level grouping of risk areas that is generally tied to an overall business goal for the organization. For example, a business goal might be to increase the efficiency of service/product delivery to customers. So, an appropriate risk domain might be titled product delivery and would include all aspects of product development, project management, and product rollout, including some security components. In this case, a security risk to product delivery might be that there is no security testing performed until the product is ready for go-live, at which point a penetration test is performed, leaving no time to fix any issues that are discovered.

Some possible business objectives that you might use to define your risk domains are as follows:

Make money (maintain a profit margin)

Don't break any laws/regulations (keep regulators happy)

Stay ahead of our competition

Grow into new markets/diversify your revenue

Increase/protect the brand value

Deliver your products and services as expected

Maintain operational excellence

Notice that we don't categorize information security as its own domain, but rather as a source of risk in many domains. If risk domains are mapped to business goals, then security doesn't make the list. Being secure is intentionally not listed as a risk domain for the organization, because security is generally a component that contributes to other risk areas like reputation or financial loss. But there are certainly security components within many other business goals, especially in the legal and regulatory functions. Think about the impact to the organization of a security breach from the risk profiling exercises. It all comes down to financial loss, damage to reputation, legal penalties, and so on. Once you realize that the risks identified through your information security risk management program really impact domains not owned by the security team, you will be able to better align your initiatives and concerns with the priorities of the business.

Operational risk is a typical risk domain for most organizations. This domain might include business continuity concerns such as disasters, service outages, supply chain failures, third-party dependencies, single points of failure, degradation of service, and so on. Typically, the operational domain is concerned with availability, but there are also other information security risks that will present within this domain.

Damage to brand/reputation is often a concern whenever a security risk is identified. The impact to the organization can be difficult to predict in all cases, but it is important to track this as a domain if appropriate. Of course, there are some organizations that are more insulated against reputational impact because customers might not have an alternative. These are all factors to account for in your risk model.

Some potential categories of security risk within each domain are as follows:

Potential for exposure of sensitive information

Potential for failure of a legal/regulatory obligation

Potential for failure of a key process

It is critical to define the organization's business objectives, define the related domains of risk, and tie your security initiatives directly to them. This will make it clear to the business leaders exactly how your efforts are supporting the goals of the organization. Anything that doesn't map to these objectives should be discarded.

Examples of Risk Areas

Within the scope of technology, there are several key risk areas (some examples are listed) that can be used to categorize similar risk types. These groupings of risks help us to identify similar risks in our modeling exercises and for reporting purposes.

Asset management

Business continuity

Change management

Vendors and outsourcing

Privacy and data protection

Physical and environmental

For example, a security risk in the change management area may include unauthorized changes to an environment or it may involve ignoring a designated maintenance window. Information security risks will typically show up in many domains and various risk areas under those domains, so it is important to add risk areas as appropriate for your organization.
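A toy illustration of the hierarchy described above, in which business-goal-aligned risk domains contain risk areas that in turn hold individual security risks; the entries are invented examples rather than the book's taxonomy.

```python
# Hypothetical mapping: risk domain -> risk area -> example security risks.
risk_register = {
    "product delivery": {
        "change management": [
            "unauthorized change to a production environment",
            "change applied outside the designated maintenance window",
        ],
    },
    "legal/regulatory": {
        "privacy and data protection": [
            "exposure of sensitive customer information",
        ],
    },
    "operational": {
        "business continuity": [
            "inability to recover data during a disaster",
        ],
    },
}

for domain, areas in risk_register.items():
    for area, risks in areas.items():
        for risk in risks:
            print(f"{domain} > {area} > {risk}")
```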

Operational, compliance, and legal concerns are usually at the top of the list for security professionals, but we must also consider other factors, like physical threats to outsourced functions or inability to recover data during a disaster situation. These all may fall under the umbrella of information security to identify and oversee, but the accountability lies with other risk domain owners to facilitate the qualification of the risks and oversee the progress of the mitigation plans to address them at the enterprise level.


URL: https://www.sciencedirect.com/science/article/pii/B9781597496155000141

Proceedings of the 9th International Conference on Foundations of Computer-Aided Process Design

Ahmed Harhara, M. M. Faruque Hasan, in Computer Aided Chemical Engineering, 2019

Calculating an Exchanger’s Tube Rupture Safety Rating (SR)

In order to design a heat exchanger and incorporate the exchanger's ability to withstand a tube rupture, a metric must be developed in terms of design parameters (pressure, temperature, volume, etc.). The following safety metric is proposed:

(1)  SR_HE = (Pdesign / P(t)max) × 100

Where…

SR = Safety Rating

Pdesign = The design pressure of the shell side

P(t)max = The maximum transient shell side pressure that is experienced during a tube rupture

The benefit of the SR metric is that it relates the severity of a tube rupture (in terms of the maximum pressure reached) to the design pressure of the shell. This allows a plant to specify a tolerance/threshold that is acceptable for its facility. An SR score of 67 or above is considered adequate for a tube rupture. An SR score of less than 67 may mean that a heat exchanger is inadequately designed for the possibility of overpressure from a tube rupture. Table 3 is a more comprehensive reference on how to interpret an SR score, and a small calculation sketch follows the table.

Table 3. Interpretation Guide for Safety Rating Scores

Safety Rating    Interpretation
100 Maximum transient shell side pressure experienced during tube rupture does not exceed the design pressure of the shell side.
> 67 Maximum transient shell side pressure experienced during tube rupture exceeds the design pressure of the shell side, but does not exceed 1.5 times the design pressure of the shell side.
67 Maximum transient shell side pressure experienced during tube rupture equals 1.5 times the design pressure of the shell side.
< 67 Maximum transient shell side pressure experienced during tube rupture exceeds 1.5 times the design pressure of the shell side.
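As a small illustration of equation (1) and the Table 3 bands, assuming the design pressure and the maximum transient shell-side pressure are already known; the pressure values below are made up.

```python
def safety_rating(p_design, p_t_max):
    """SR = (shell-side design pressure / maximum transient pressure) x 100."""
    return p_design / p_t_max * 100

def interpret(sr, tol=0.5):
    # Bands follow Table 3; an SR above 100 also means the design pressure is
    # never exceeded, so it is grouped with the first band here.
    if sr >= 100:
        return "transient pressure stays at or below the design pressure"
    if sr > 67 + tol:
        return "exceeds design pressure but stays below 1.5x design pressure"
    if sr >= 67 - tol:
        return "roughly equals 1.5x design pressure"
    return "exceeds 1.5x design pressure"

# Hypothetical example: shell designed for 10 bar, peak transient of 13 bar.
sr = safety_rating(p_design=10.0, p_t_max=13.0)
print(f"SR = {sr:.0f}: {interpret(sr)}")
```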

The addition of the SR metric raises the question: why not instead calculate the risk of a tube rupture? Quantifying the risk for this scenario is a difficult task, primarily because little publicly available data exists on the probability of heat exchanger tube ruptures. This makes risk calculations highly unreliable. The SR metric does, however, indirectly measure the risk of a tube rupture, because it incorporates the transient pressure on the shell side and compares it to the design pressure (the pressure that the equipment is rated for). For a tube rupture scenario that barely exceeds the shell's design pressure (interpreted as a high SR metric), minimal impact to the exchanger is expected. However, for a tube rupture that greatly exceeds the shell's hydrotest pressure (interpreted as a low SR metric), a catastrophic failure may occur.

It should be noted that the SR metric resembles the common industry rule of thumb known as the "two-thirds rule". Moreover, Table 3 lists an SR score of 67 as the cutoff point for whether or not an exchanger is able to withstand a tube rupture. The difference between the SR metric and the two-thirds rule is that the SR metric is intended to use the maximum transient shell side pressure, while the two-thirds rule only compares the design pressure and hydrotest pressure of the tube and shell sides, respectively. In addition, because the SR metric requires the maximum transient shell side pressure, it may incorporate the use of a pressure relief device to increase its score (making the process safer). Thus, it is entirely possible for an exchanger with a tube side pressure of 100 bar and a shell side pressure of 10 bar to have an SR score of 100, for example if it has a pressure relief valve adequately sized to handle the influx of incoming tube side fluid. The exchanger may also have a larger volume, a smaller tube size, or many other design variations in order to obtain an SR of 100.

It should also be noted that the two-thirds rule acts as a screening mechanism for determining whether or not an exchanger needs to have a pressure relief valve installed (Hellemans, 2009). In contrast, the SR metric functions as a safety “operating level” or specification that a plant determines it wishes to perform at.


URL: https://www.sciencedirect.com/science/article/pii/B9780128185971500357

12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering

Adriana Avilés-Martínez, ... Agustín Jaime Castro-Montoya, in Computer Aided Chemical Engineering, 2015

3 Results and Discussion

The stream from the fermentation step enters the column C0 (see Figures 1 and 2) to concentrate the diluted mixture to a composition near the azeotrope composition (D0). The distillate stream is the feed to the azeotropic column C1, where the dehydration step takes place and the top product (D1), containing mostly water, is located in the immiscibility zone of the ethanol-water-n-octane system. The product is condensed and sent to a liquid-liquid separator to obtain an n-octane rich phase (ORG) that is recirculated to the azeotropic column. The aqueous phase is recycled to the column C0 because it still contains ethanol. The bottom C1 product is an n-octane-ethanol mixture that enters the column C2 to recover the entrainer and obtain the final product, an ethanol-n-octane mixture (B2). The results of the design for the columns and mass flows are presented in Tables 1 and 2.

Table 1. Design parameters of columns for the azeotropic distillation scheme

C0    C1    C2
Number of stages 40 28 12
Feed stages F0-20 A-21 D0-10 SOL-2 ORG-3 B1-6
Design Pressure (N/m2) 101,325 101,325 101,325
Reflux ratio 2 3 0.0002
Heat duty (kW) 4,906.00 1,457.65 1,418.00

Table 2. Mass flowrates for streams

Flows (kg/s)    Ethanol    Water    n-octane    T (K)
F0 1.02376 3.60306 - 303.15
D0 1.17127 0.107584 0.00572393 351.15
B0 - 3.60306 - 373.15
SOL - - 0.60284 303.15
D1 0.151802 0.107712 0.17818 343.15
B1 1.02376 - 3.85052 350.15
ORG 0.004286 0.000128 0.172461 343.15
AQ 0.147516 0.107584 0.005724 343.15
D2 1.02376 - 0.60284 350.15
B2 - - 3.24768 399.15

For risk calculations, five catastrophic scenarios are considered. There are two types of mass releases, instantaneous and continuous. In the case of an instantaneous release, the outcomes are boiling liquid expanding vapour explosion (BLEVE), unconfined vapour cloud explosion (UVCE), and flash fire due to instantaneous release (FFI). The other two scenarios correspond to a continuous release, jet fire and flash fire due to continuous release (FFC). The calculations were carried out considering only n-octane within the process. The estimated total distance likely to cause death (DD) was 0.1446371 m/y, which represents the total individual risk of the process considering the extractive and recovery columns. The corresponding risk of the five events can be seen in Figure 3 for columns C1 and C2.


Figure 3. Total distance likely to cause death for the azeotropic distillation scheme.

Table 3 shows the distances of impact obtained for all events. Although BLEVE scenarios have the greatest distances, we can see in Figure 3 that flash fire due to a continuous release represents the worst-case scenario for both columns. This is because FFC has a higher probability of occurrence. The probability of occurrence for FFC is 2.48 × 10−4/y in contrast to the BLEVE probability of occurrence of 5.75 × 10−6/y, a difference of two orders of magnitude.

Table 3. Fatal distances for the different events

Event    Di (m) C1    Di (m) C2
BLEVE 2136.08 2182.66
UVCE 657.10 667.99
FF INS. 812.00 826.34
JET FIRE 29.06 29.05
FF CONT 189.46 192.72
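The excerpt does not spell out how the event frequencies and impact distances combine into the total of 0.1446 m/y; the sketch below simply assumes a frequency × distance contribution per event and uses the FFC and BLEVE figures quoted above for column C1, so it is illustrative only.

```python
# Event frequency (1/y) and impact distance (m) for column C1, using the FFC
# and BLEVE figures quoted in the text and Table 3; the frequency x distance
# aggregation is an assumption of this sketch, not stated in the excerpt.
events_c1 = {
    "BLEVE":   (5.75e-6, 2136.08),
    "FF CONT": (2.48e-4, 189.46),
}

for name, (frequency, distance) in events_c1.items():
    print(f"{name}: contribution ~ {frequency * distance:.4f} m/y")
# FF CONT dominates despite its much shorter impact distance, because its
# frequency is roughly two orders of magnitude higher than that of the BLEVE.
```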

As mentioned above, the approach is based on a multi-criteria analysis. Therefore, Table 4 shows the cost of dehydrating the product, with a flow of 800 kmol/h and a composition of 90% mol water. The total purification cost is 0.0752 US$/kg ethanol. Carbon dioxide emissions were estimated assuming crude oil as fuel. The value reported in Table 4 shows CO2 emissions of 0.007639 Kg/s per kmol of ethanol dehydrated.

Table 4. Azeotropic distillation scheme costs for an 80 kmol/h ethanol production

Cost Analysis    Result (USD/year)
Equipment 1,116,920.00
Utilities 1,309,450.00
Total Annual Cost 2,426,370.00
CO2 Emissions 2200 Kg/hr

In order to compare the results with the extractive distillation process to dehydrate bioethanol, we considered the works reported by Avilés-Martinez et al. (2012) and Medina-Herrera et al. (2014), in which extractive distillation was used to obtain anhydrous bioethanol. Using the same ethanol production rate as in this work, Medina-Herrera et al. (2014) minimized the total individual risk in the extractive column and in the ethylene-glycol recovery column, and reported a distance likely to cause death of 0.2052 m/y, which is higher than the result obtained here of 0.1446 m/y. In the work by Avilés-Martinez et al. (2012), glycerol was considered as entrainer. Based on the design parameters reported in their work, we simulated their extractive distillation process for the diluted water-ethanol mixture considered here. The results include a higher TAC of 3,148,340 US$/y, equivalent to 0.0996 US$/kg ethanol, and a total heat duty of 0.010931 GJ/kg-ethanol. The CO2 emissions were estimated at 3096.23 kg/h, equivalent to 0.011 Kg/s for every kmol of ethanol purified. Figure 4 summarizes the results obtained for the economic, safety, energy and environmental terms analyzed for the separation schemes; it can be observed that all of these factors favor the use of azeotropic distillation over extractive distillation for the case study considered here.


Figure 4. Comparison of total cost, total distances likely to cause death, energy requirements and CO2 emissions for the azeotropic (P1) and extractive distillation (P2) processes.


URL: https://www.sciencedirect.com/science/article/pii/B9780444635778501510

Online Identity and User Management Services

Tewfiq El Maliki, Jean-Marc Seigneur, in Managing Information Security (Second Edition), 2014

Identity Management Overview

A model of identity can be seen as follows [7]:

User: the party who wants to access a service

Identity Provider (IdP): the issuer of the user's identity

Service Provider (SP): the relying party imposing an identity check

Identity (Id): a set of the user's attributes

Personal Authentication Device (PAD): a device holding various identifiers and credentials that could be used for mobility

Figure 4.1 lists the main components of identity management. The relationship between entities, identities, and identifiers is shown in Figure 4.2, which illustrates that an entity, such as a user, may have multiple identities, and each identity may consist of multiple attributes that can be unique or non-unique identifiers.


Figure 4.1. Identity management main components.


Figure 4.2. Relationship between identities, identifiers and entity.
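A minimal sketch of the entity-identity-identifier relationship illustrated in Figure 4.2, using hypothetical Python classes whose field names are illustrative rather than taken from the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    # An identity is a set of attributes; some act as identifiers, which may
    # or may not be unique (e.g., an email address versus a first name).
    attributes: dict = field(default_factory=dict)

@dataclass
class Entity:
    # An entity, such as a user, may hold several identities at once.
    name: str
    identities: list = field(default_factory=list)

user = Entity(name="Alice", identities=[
    Identity({"email": "alice@example.org", "role": "employee"}),
    Identity({"gamer_tag": "al1ce", "age_range": "25-34"}),
])
print(len(user.identities))  # -> 2
```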

Identity management refers to “the process of representing, using, maintaining, deprovisioning and authenticating entities as digital identities in computer networks”.

Authentication is the process of verifying claims about holding specific identities. A failure at this stage will threaten the validity of the entire system. Technology is constantly moving toward stronger authentication using claims based on:

Something you know: password, PIN

Something you have: one-time-password

Something you are: your voice, face, fingerprint (Biometrics)

Your position

Some combination of the four.

The BT report [3] has highlighted some interesting points to meet the challenges of identity theft and fraud:

Developing risk calculation and assessment methods

Monitoring user behavior to calculate risk

Building trust and value with the user or consumer

Engaging the cooperation of the user or consumer with transparency and without complexity or shifting the liability to the consumer

Taking a staged approach to authentication deployment and process challenges, using more advanced technologies

Digital identity should manage three connected vertices: usability, cost, and risk, as illustrated in Figure 4.3.


Figure 4.3. Digital identity environment to manage.

Users should be aware of the risk they face if their device's or software's security is compromised. Usability is the second aspect that should be guaranteed to the user; otherwise users will find the system difficult to use, which can itself become a source of security problems. Indeed, many users who are flooded with passwords write them down and hide them in a "secret" place under their keyboard. Furthermore, the difficulty of deploying and managing a large number of identities discourages the use of identity management systems. The cost of a system should be carefully studied and balanced against risk and usability. Many systems, such as one-time-password tokens, are not widely used because they are too costly for widespread deployment in large institutions. Traditionally, identity management was seen as service-provider-centric, as it was designed to fulfill the requirements of service providers, such as cost effectiveness and scalability. Users were neglected in many aspects because they were forced to memorize difficult or too many passwords. Identity management systems are elaborated to deal with the following core facets [8]:

Reducing identity theft: The problem of identity theft is becoming a major one, mainly in the online environment. Providers need more efficient systems to tackle this problem.

Management: The number of digital identities per person will increase, so users need convenient support to manage these identities and the corresponding authentication.

Reachability: The management of reachability allows users to handle their contacts and prevent misuse of their addresses (spam) or unsolicited phone calls.

Authenticity: Ensuring authenticity with authentication, integrity, and non-repudiation mechanisms can prevent identity theft.

Anonymity and pseudonymity: Providing anonymity prevents tracking or identifying the users of a service.

Organization personal data management: A quick method to create, modify, or delete work accounts is needed, especially in big organizations.

Without improved usability of identity management [8], for example, weak passwords used by users on many Web sites, the number of successful attacks will remain high. To facilitate interacting with unknown entities, simple recognition rather than authentication of a real-world identity has been proposed, which usually involves manual enrollment steps in the real world [5]. Usability is indeed enhanced if no manual task is needed. There might be a weaker level of security, but that level may be sufficient for some actions, such as logging in to a mobile game platform. Single Sign-On (SSO) is the name given to the requirement of eliminating multiple-password issues and dangerous passwords. When we use multiple user IDs and passwords just to access the email systems and file servers at work, we feel the inconvenience that comes from having multiple identities. The second problem is the scattering of identity data, which causes problems for the integration of IT systems. SSO simplifies the end-user experience and enhances security via identity-based access technology.

Microsoft's first large identity management system was the Passport Network. It was a very large and widespread Microsoft Internet service, intended to be an identity provider for MSN and the Microsoft properties, and an identity provider for the Internet as a whole. However, with Passport, Microsoft was suspected by many of intending to take absolute control over the identity information of Internet users and to exploit it for its own interests. Passport failed to become the Internet identity management tool. Since then, Microsoft has clearly understood that an identity management solution cannot succeed unless some basic rules are respected [9]. That is why Microsoft's Identity Architect, Kim Cameron, stated the seven laws of identity. His motivation was purely practical: determining the prerequisites of a successful identity management system. He formulated the following essential principles to maintain privacy and security.

1.

User control and consent over the handling of their data.

2.

Minimal disclosure of data, and for a specified purpose.

3.

Information should only be disclosed to people who have a justifiable need for it.

4.

The system must provide identifiers for both bilateral relationships between parties, and for incoming unsolicited communications.

5.

It must support diverse operators and technologies.

6.

It must be perceived as highly reliable and predictable.

7.

There must be a consistent user experience across multiple identity systems and using multiple technologies.

Most systems do not fulfill several of these tests; in particular, they are deficient in fine-tuning the access control over identity to minimize the disclosure of data. Cameron's principles are very clear, but they are not explicit enough to compare identity management systems in detail. That is why we will define the identity requirements explicitly.


URL: https://www.sciencedirect.com/science/article/pii/B9780124166882000040

Identity Management

Dr.Jean-Marc Seigneur, Dr.Tewfiq El Maliki, in Computer and Information Security Handbook, 2009

Identity Management Overview

A model of identity7 can be seen as follows:

A user who wants to access a service

Identity Provider (IdP), the issuer of user identity

Service Provider (SP), the relying party imposing an identity check

Identity (Id), a set of user attributes

Personal Authentication Device (PAD), which holds various identifiers and credentials and could be used for mobility

Figure 17.1 lists the main components of identity management.


Figure 17.1. Identity management main components.

The relationship between entities, identities, and identifiers is shown in Figure 17.2, which illustrates that an entity, such as a user, may have multiple identities, and each identity may consist of multiple attributes that can be unique or non-unique identifiers.


Figure 17.2. Relationship among identities, identifiers, and entity.

Identity management refers to “the process of representing, using, maintaining, deprovisioning and authenticating entities as digital identities in computer networks.”8

Authentication is the process of verifying claims about holding specific identities. A failure at this stage will threaten the validity of the entire system. The technology is constantly finding stronger authentication using claims based on:

Something you know (password, PIN)

Something you have (one-time-password)

Something you are (your voice, face, fingerprint [biometrics])

Your position

Some combination of the four

The BT report9 has highlighted some interesting points to meet the challenges of identity theft and fraud:

Developing risk calculation and assessment methods

Monitoring user behavior to calculate risk

Building trust and value with the user or consumer

Engaging the cooperation of the user or consumer with transparency and without complexity or shifting the liability to the consumer

Taking a staged approach to authentication deployment and process challenges using more advanced technologies

Digital identity should manage three connected vertexes: usability, cost, and risk, as illustrated in Figure 17.3.


Figure 17.3. Digital identity environment to be managed.

The user should be aware of the risk she is facing if her device's or software's security is compromised. Usability is the second aspect that should be guaranteed to the user; otherwise the user will find the system difficult to use, which could be the source of a security problem. Indeed, many users, when flooded with passwords to remember, write them down and hide them in a "secret" place under their keyboard. Furthermore, the difficulty of deploying and managing a large number of identities discourages the use of an identity management system. The cost of a system should be well studied and balanced against risk and usability. Many systems such as one-time password tokens are not widely used because they are too costly for widespread deployment in large institutions. Traditionally, identity management was seen as being service provider-centric because it was designed to fulfill the requirements of service providers, such as cost effectiveness and scalability. Users were neglected in many aspects because they were forced to memorize difficult or too many passwords.

Identity management systems are elaborated to deal with the following core facets10:

Reducing identity theft. The problem of identity theft is becoming a major one, mainly in the online environment. Providers need more efficient systems to tackle this issue.

Management. The number of digital identities per person will increase, so users need convenient support to manage these identities and the corresponding authentication.

Reachability. The management of reachability allows a user to handle their contacts to prevent misuse of their email address (spam) or unsolicited phone calls.

Authenticity. Ensuring authenticity with authentication, integrity, and nonrepudiation mechanisms can prevent identity theft.

Anonymity and pseudonymity. Providing anonymity prevents tracking or identifying the users of a service.

Organization personal data management. A quick method to create, modify, or delete work accounts is needed, especially in big organizations.

Without improved usability of identity management11—for example, weak passwords used on many Web sites—the number of successful attacks will remain high. To facilitate interacting with unknown entities, simple recognition rather than authentication of a real-world identity has been proposed, which usually involves manual enrollment steps in the real world.12 Usability is indeed enhanced if no manual task is needed. There might be a weaker level of security, but that level might be sufficient for some actions, such as logging to a mobile game platform. Single Sign-On (SSO) is the name given to the requirements of eliminating multiple-password issues and dangerous passwords. When we use multiple user IDs and passwords just to use email systems and file servers at work, we feel the inconvenience that comes from having multiple identities. The second problem is the scattering of identity data, which causes problems for the integration of IT systems. SSO simplifies the end-user experience and enhances security via identity-based access technology.

Microsoft’s first large identity management system was the Passport Network. It was a very large and widespread Microsoft Internet service, an identity provider for the MSN and Microsoft properties and for the Internet. However, with Passport, Microsoft was suspected by many of intending to have absolute control over the identity information of Internet users and thus exploiting them for its own interests. Passport failed to become “the” Internet identity management tool.

Since then, Microsoft has clearly come to understand that an identity management solution cannot succeed unless some basic rules are respected.13 That's why Microsoft's Identity Architect, Kim Cameron, has stated the seven laws of identity. His motivation was purely practical in determining the prerequisites of creating a successful identity management system. He formulated these essential principles to maintain privacy and security:

User control and consent over the handling of their data

Minimal disclosure of data, and for a specified purpose

Information should only be disclosed to people who have a justifiable need for it

The system must provide identifiers for both bilateral relationships between parties and for incoming unsolicited communications

It must support diverse operators and technologies

It must be perceived as highly reliable and predictable

There must be a consistent user experience across multiple identity systems and using multiple technologies.

Most systems do not fulfill several of these tests; they are particularly deficient in fine-tuning the access control over identity to minimize disclosure of data.

Cameron’s principles are very clear but they are not explicit enough to compare identity management systems. That’s why we will explicitly define the identity requirements.


URL: https://www.sciencedirect.com/science/article/pii/B9780123743541000170

Implementing Adaptive Security

Eric Cole, in Advanced Persistent Threat, 2012

Key Emerging Technologies

This book laid out an approach for effectively dealing with the APT. The methods in this book will scale, providing effective security today and into the future, because they focus on fixing the problem, not on treating the symptoms. However, the threat will continue to evolve, so it is important not to focus only on the current concern, the APT, but also on all next-generation threats. Even as organizations continue to focus on dealing effectively with the APT, the APT is not going to go away; it is going to evolve. As the defense gets more effective, the offense will change and adapt. In addition to properly protecting data and mitigating risk, it is also important to look out on the horizon and track the emerging trends that are needed to scale security into the future. Some of the key emerging trends that effective organizations are focusing on are:

1.

More focus on data correlation—Instead of adding more devices to a network, perform data correlation across the existing devices first. Networks are becoming so complex that no single device will be able to give enough insight into what is happening across an organization. To better understand both normal and anomalous traffic, data correlation has to be performed across all critical devices. Each device/server has a piece of the puzzle, and only by putting all of the pieces together can organizations understand what is really happening.

2.

Threat intelligence analysis will become more important—Many of the products in the security industry are becoming more commoditized. Many consoles and network devices are very similar in how they work and operate. The key differentiator is having accurate and up-to-date threat data. Organizations cannot fix every single risk; therefore, as the risks grow, more focus has to be put on the real attack vectors. A growing theme is that the defense must learn from the offense. Threat must drive the risk calculation so that the proper vulnerabilities can be addressed. Only with proper threat data can the avenues of exploitation be fixed.

3.

Endpoint security becomes more important—As more and more devices become portable, the endpoint becomes more critical. In terms of the data it contains, there is little difference between a server and a laptop. A server might have more data, but laptops typically still hold a significant amount of critical information. However, the server sits on a well-protected network, while the laptop is usually connected directly to untrusted networks, including wireless. Therefore, we need to move beyond traditional endpoint protection and focus on controlling, monitoring, and protecting the data on the endpoints.

4.

Focusing on proactive forensics instead of being reactive—Attacks are so damaging that once an attacker gets in, it is too late. In addition, with technologies like virtualization and SCADA controllers, performing reactive forensics is more difficult and sometimes not possible. Therefore, more energy and effort needs to be put into proactively identifying problems and avenues of compromise before major impact is caused to an organization. With the amount of intellectual property being stolen and the reputational damage involved, proactive is the only way to go.

5.

Moving beyond signature detection—Signature detection worked because malicious code did not change and large-scale exploitation took a while to occur. While signature detection is still effective at catching some attacks, it does not scale to the advanced persistent threat (APT) that continues to occur. Therefore, signature detection must be coupled with behavioral analysis to effectively prevent and detect the emerging threats that will continue to appear. Since the new threats are constantly changing and persistent, only behavioral analysis has a chance of dealing with the malicious attacks in an effective way.

6.

Users will continue to be the target of attack—Everyone likes to focus on the technical nature of recent attacks, but when you perform root cause analysis, the entry point for most of these sophisticated APT attacks is a user opening an attachment, clicking on a link, or performing some action they are not supposed to. After an initial control point is gained on the private network, the attacks become very sophisticated and advanced, but the entry point for many attacks is traditional social engineering. Advanced spear phishing attacks will trick the user into performing some action they are not supposed to. While you will never get 100% compliance from employees, organizations need to put energy into this area because they will get both short- and long-term benefits.

7.

Shifting from focusing on data encryption to key management—Crypto is the solution of choice for many organizations; however, they fail to realize that crypto does not do any good if the keys are not properly managed and protected. Crypto has quickly become painkiller security because organizations are focused on the algorithms and not the keys. The most robust algorithms in the world are no good without proper management of the keys. Most data that is stolen comes from encrypted databases because the keys are stored directly with the encrypted data.

8.

Cloud computing will continue regardless of the security concerns—Even though there are numerous concerns and security issues with the cloud, you cannot argue with free. As companies continue to watch the bottom line, more of them are wondering why they are in the data center business. Moving to both public and private clouds can lower costs and overhead; however, as with most things, security will not be considered until after there are major problems. Attackers will always focus on high-payoff targets. As more companies move to the cloud, the attack methods and vectors will also increase at an exponential rate, including APTs focused on the cloud.

9.

New Internet protocols with increased exposure—As the Internet continues to grow and be used for everything, new protocols will continue to emerge. The problem is that the traditional model of deploying new protocols no longer works. In the past, a new protocol was developed and took a long time to achieve mainstream usage. This allowed the problems to be worked out and security to be properly implemented. Today, when a new protocol comes out, it is adopted so quickly that problems are only identified after there is widespread use, which quickly leads to widespread attacks.

10.

Integrated/embedded security devices—Not only is technology becoming integrated into almost every component, but more functionality is also being moved to the hardware level. Beyond the obvious implication of having more targets to cover, embedded devices create a bigger problem: it is much harder to patch hardware than software. If software has a problem, you can deploy a patch. If hardware has a vulnerability, it will take longer to fix and will increase the attack surface. The smart grid is a good example of items 9 and 10 combined.


URL: https://www.sciencedirect.com/science/article/pii/B9781597499491000127

What is the basic risk formula?

Risk is the combination of the probability of an event and its consequence. In general, this can be explained as: Risk = Likelihood × Impact.

How is risk analysis calculated?

How to perform a risk analysis:
Identify the risks. Make a list of potential risks that you could encounter as a result of the course of action you are considering.
Define levels of uncertainty.
Estimate the impact of uncertainty.
Complete the risk analysis model.
Analyze the results.
Implement the solution.

What are the 3 steps of risk analysis?

Risk assessment is the name for the three-part process that includes: Risk identification. Risk analysis. Risk evaluation.

What is the calculation of risk?

Risk can be defined as the combination of the probability of an event occurring and the consequences if that event does occur. This gives us a simple formula to measure the level of risk in any situation. Risk = Likelihood x Severity.