ICE Use & Interpretation
How to use and interpret ICE scores and rankings
Our search engine allows you to search the entire database of case studies using filters and keywords, weighted by importance. You will see choices for a free-text search and a coded search. The free-text search examines the case abstracts only.
Searching on one or more categories or characteristics produces a list of cases relevant to the issue at hand.
New Case Input Instructions: When a new case arises, submit its attributes using the "coded" search option. You do not need to fill out every category; categories left blank are treated as "wild cards." The more attributes you provide, the better the results, but searches on differing numbers of fields are possible. Attribute values may become clear only as the event evolves, so this may require an iterative process (see ICE Fields and Attributes). You may return to the expert system several times as the event unfolds and more attributes become known or change with time.
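For illustration only, a partially coded case might look like the following Python sketch; the field names mirror the twelve input categories listed in Table VI below, while the example values are hypothetical placeholders rather than the actual ICE coding vocabulary.

# A partially specified case profile; attributes not yet known are left as
# None and treated as wild cards (they match any stored case).
new_case = {
    "continent": "Africa",
    "region": None,                 # wild card: not yet known
    "country": None,                # wild card
    "habitat": "Dry",
    "environmental_problem": "Water scarcity",
    "scope": None,                  # may become clear as the event evolves
    "trigger": None,
    "type": None,
    "outcome": None,
    "conflict_level": None,
    "time_period": "Modern",
    "duration": None,
}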
Weighting Instructions: You can weight certain attributes so that the match rankings are tailored to a particular situation. With a multiplier of 2, matches in the selected category are weighted twice as heavily in your results; a multiplier of 3 weighs them three times as heavily, and so on. A weight of 1 is the default selection; if all variables are weighted 1, matches have equal weight in determining the results. Weightings can range from 1 to 10 extra "points" in searching for matches. There are ten categories in ICE, so giving a weighting of 10 to any category will predispose the results inordinately (in equal terms) toward that choice. In general, a weighting of 1-3 has a slight impact, 4-6 a moderate impact, and 7-10 a high impact on the final results and scoring.
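The short Python sketch below suggests how such weighted matching might work in principle; the function name, the skipping of wild-card categories, and the normalization are assumptions made for the example, not the documented ICE formula.

def weighted_match_score(new_case, past_case, weights):
    """Score how well a stored case matches a new case profile (0-100)."""
    earned = possible = 0
    for category, value in new_case.items():
        if value is None:                  # wild card: matches anything, not scored
            continue
        weight = weights.get(category, 1)  # a weight of 1 is the default
        possible += weight                 # track the maximum weighted points
        if past_case.get(category) == value:
            earned += weight               # a match earns the category's weight
    return 100.0 * earned / possible if possible else 0.0

# Doubling the weight on "environmental_problem" makes a match in that
# category count twice as heavily; weights of 7-10 would dominate the result.
example_weights = {"continent": 1, "habitat": 1,
                   "environmental_problem": 2, "time_period": 1}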
You can also display scores and rank the cases both by their relevance (based on historical, geographic, and situational criteria) and by the favorability of their outcomes (avoidance of loss, low casualties, and a short duration).
There must be a more nuanced system that can take this mass of research embodied in case studies and turn it into ranked choices for decision makers dealing with crises. Such a system must be based on a relative scoring scheme with two further dimensions. Given the basic PMS system, which provides the Historical Match Scores (H scores), it is also necessary to view the matches in terms of their analytic relevance and their usefulness to the case for which a decision is required.
The base scoring for the pattern matches is the degree to which the prospective case matches the attributes of prior cases. This is the heart of the ICE PMS: selecting from the ICE database the case attributes of relevance and assembling them into a coherent profile. The cases are broken down into "tiers" of matches.
In ICE there are 16 categories in total, and 12 specific attributes from them are included here. The number of case matches divided by the total number of possible category choices gives a relative historical case match score (H score), expressed as a percentage. Attributes can be specific or unspecific ("any"), depending on the precision with which the scenario parameters are defined. Appendix A shows the attribute choices in ICE, all of which are delimited except "country". Table VI shows the 12 potential attributes for input in describing a new or possible case; a minimal arithmetic sketch follows the table.
Table VI
The Basic Input Attribute Template for Scenario Creation
Category Attribute Weight
1. Continent 1
2. Region 1
3. Country 1
4. Habitat 1
5. Environ Problem 1
6. Scope 1
7. Trigger 1
8. Type 1
9. Outcome 1
10. Conflict Level 1
11. Time Period 1
12. Duration 1
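As a minimal Python sketch of the H-score arithmetic described above (the match count is invented purely for the example):

# Illustrative H-score arithmetic: if a stored case matches 9 of the 12
# input categories in the template above, the historical match score is
h_score = 9 / 12          # 0.75, i.e., a 75 percent historical match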
There are two other ways to adjust the data and thus refine the scoring for specific outcomes. The second scoring mechanism is the case relevance (R score), which provides a further adjustment to the H score. The adjustments are measurable and are based on select categories in ICE. There are three measures of relevance, which together yield a total score and a relevance-adjusted pattern match percentage. In the scoring, all choices are rated on a scale of one to three, with three being the most desired. Summing the three sub-scores gives an overall score that is then divided by the total number of possible R score points (9). This "relevance score", expressed as a percentage, is "now, near, and native" in orientation: it favors cases that are recent, nearby, and of similar scope. The higher the score, the more relevant the case. (A worked example follows Table 4 below.)
Table 2 shows the historical relevance scoring system, with more recent cases rated as more relevant than those more distant in time. This does not mean that the older cases should be avoided, but rather that they be taken in their historical context. The source of this indicator is the ICE "Time Period" category.
Table 2
Historical Relevance Parameters
Attribute Score Definition
Ancient = 1 = Before 1 A.D.
Middle Ages = 2 = 1 A.D. to 1800 A.D.
Modern = 3 = 1801 A.D. to today
Geographic relevance rewards those locales nearest to the case and penalizes those farthest away. Generally, cases in the same country are given a high score (3), cases in the same or an adjoining region a medium score (2), and cases sharing only a continent a low score (1) (see Table 3).
Table 3
Geographic Relevance Parameters
Attribute Score Definition
Continent = 1 = Continent
Region = 2 = Region
Country = 3 = Country
Source: ICE category 4, Geography
Scope relevance measures the degree to which the objectives of a matched case fit those of the case at hand. The scope of a case can range from global issues to those that are subnational in nature (see Table 4).
Table 4
Scope Relevance
Attribute Score Definition
Multilateral = 1 = Multi-regional interests
Regional = 2 = Regional interests
Unilateral = 3 = Country and civil interests
Source: ICE category 13, Scope
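To make the relevance arithmetic concrete, here is a worked sketch that picks one value from each of Tables 2 through 4; the particular case profile is invented for illustration.

# Worked R-score sketch: a modern-era case (3) in the same region (2)
# with unilateral scope (3).
historic_relevance   = 3   # Modern (Table 2)
geographic_relevance = 2   # Region (Table 3)
scope_relevance      = 3   # Unilateral (Table 4)

# Sum the three sub-scores and divide by the 9 possible points for a percentage.
r_score = 100 * (historic_relevance + geographic_relevance + scope_relevance) / 9
# r_score is roughly 88.9 percent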
The second additional scoring mechanism is the decision outcome and its importance (O score), which focuses attention on the case aspects that matter for choosing goals. These aspects are again measurable and are categories in ICE. There are three measures of outcome importance, which together yield a total score and an outcome-adjusted pattern match percentage. In the scoring, all choices are rated on a scale of one to three, with three being the most desired. In general, it is thought that a conflict should end quickly, with few casualties, and constitute a victory. (A worked example follows Table 7 below.)
What happens in a case is clearly important for the decision maker facing a crisis at hand. In the matched case, was there a clear loss or victory, or did the situation turn into a stalemate? Naturally, victory is favored over loss (see Table 5).
Table 5
Decision Outcome Score
Attribute Score Definition
Stalemate or loss = 1 = No win, long period
In Progress = 2 = Ongoing
Victory = 3 = Clear victory
Source: ICE Outcome category
No decision maker would want more casualties from conflict than is necessary, at least in today's "civilized" world of international relations. The gradations of conflict level are ordered on a logarithmic basis (see Table 6).
Table 6
Fatality Importance
Attribute Score Definition
High = 1 = More than 100,000
Medium = 2 = 1,000 to 100,000
Low = 3 = Less than 1,000
Source: ICE category 10, Conflict Level
Decision makers dream of quick victories with few fatalities (especially where their own side is concerned). The length of a conflict affects the public and its attendant level of support. The ratings are based on political election cycles, which usually last about four years (using the United States as an example), and are shown in Table 7.
Table 7
Duration Importance
Attribute Score Definition
Long = 1 = More than 10 years
Medium = 2 = 4 to 10 years
Short = 3 = Less than 3 years
Source: ICE category 3, Duration
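A parallel worked sketch for the O score, again with an invented case profile drawn from Tables 5 through 7:

# Worked O-score sketch: a clear victory (3) with fewer than 1,000
# fatalities (3), lasting under 3 years (3), the most favorable profile.
outcome_score  = 3   # Victory (Table 5)
fatality_score = 3   # Low (Table 6)
duration_score = 3   # Short (Table 7)

o_score = 100 * (outcome_score + fatality_score + duration_score) / 9
# o_score is 100 percent: the ideal quick, low-casualty victory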
The relevance-adjusted H score is then further adjusted by the outcome score (O score). Again, this sums the scores for each of the three sub-categories and divides by the total number of possible O score points (9). This yields a score that is specific to the purposes of the decision maker in the case and may differ for other decision makers in other places and at other times.
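The text does not spell out the exact operation by which the H score is adjusted, so the sketch below simply assumes a multiplicative combination of the three percentages (expressed as fractions) for illustration:

# One possible way to combine the three scores into a single ranking value;
# multiplication is an assumption made for this sketch, not a documented rule.
h_score = 0.75    # historical match from the pattern-matching step
r_score = 0.889   # relevance adjustment ("now, near, and native")
o_score = 1.00    # outcome adjustment (a favorable result)

combined = h_score * r_score * o_score   # about 0.667 in this invented example
# Matched cases would then be ranked from highest to lowest combined value.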
In the end, the decision maker has a ranking, grounded in analysis and relative reliability, that points to weighted case examples to follow or to avoid. This approach is only the formalization of an informal process that already occurs. Today, however, this knowledge is largely an oral tradition, residing in the wisdom and recall of key decision makers and their accumulated knowledge over time. Unfortunately, those decision makers move in and out of government. This tool would be a lasting edifice.
With this analytical scoring system in mind, the next section shows it in use. As opposed to the earlier effort, which looked at historical cases, this section posits six new possible cases that might occur in the next 10 to 25 years. The purpose of the cases is to suggest new conflict areas and trends, and also to test the expert system's ability to provide useful information. The six scenario cases are chosen to reflect the six topic areas and types that have structured interest in conflict and environment cases over time.