2. Framework for Conducting Legal Needs Surveys

This chapter establishes a methodological and conceptual framework for conducting legal needs surveys and offers illustrative taxonomies of legal problems, sources of help and dispute resolution processes, at multiple levels of detail. It also addresses how surveys can be used to measure legal needs.


2.1. Legal needs surveys in context

Legal needs surveys constitute a specialised form of survey research. They should therefore adhere as far as is possible to good practice in the survey research field. Their conduct should involve consideration of the relative benefits of different modes of survey administration, determination of the most appropriate sample frame and sampling method, maximisation of validity and reliability in questionnaire design, review and testing of design elements ahead of full implementation, conduct of suitable forms of analysis, and timely and faithful reporting.

General guidance on survey research, and more particular guidance on its conduct in developing and transitional countries and in the justice sector, is readily available.1 This chapter complements this existing guidance by identifying and expanding upon a variety of considerations that are unique to legal needs surveys.

Legal needs surveys give rise to unique considerations because they are situated within a unique conceptual framework, at the heart of which is the concept of the “justiciable” problem. Legal needs surveys must therefore be framed appropriately and justiciable problems appropriately defined. There are also unique considerations concerning survey scope, units of measurement, and the operationalisation of other distinct concepts relating to justiciable problem experience, such as forms and sources of help, dispute resolution processes, outcomes, legal capability and legal need.

This chapter examines the concepts, framing, scope and units of measurement used in legal needs surveys and provides taxonomies of justiciable problems, sources of help and dispute resolution processes. These taxonomies can assist in the process of defining the subject matters of legal needs surveys, as well as supporting greater consistency and opportunity for comparison between surveys. This chapter also discusses how legal needs survey questionnaires are best structured.

2.2. “Justiciable” problems

The term “justiciable” has been used intermittently since the 15th century to indicate an issue within the jurisdiction of a court of law, or one liable to be taken to court.2 In the context of legal needs surveys, however, the term has acquired a more particular meaning, coined in the reporting of the seminal Paths to Justice survey (Genn, 1999, p. 12). It reflects the recognition that, beyond problems that become “legal” – through use of traditional legal services or processes, or simply through consideration in legal terms – there are many problems for which law provides a framework and in which law could potentially be invoked, but to which no explicit consideration of law is given (often appropriately and without cause for concern). Accordingly, throughout this document, the term “justiciable” is used to describe problems that raise legal issues, whether or not this is recognised by those facing them, and whether or not lawyers or legal processes are invoked in any action taken to deal with them.

2.2.1. Framing legal needs surveys: Defining the subject matter

The presentation of research and the formulation of questions can have a substantial impact on the nature of responses. This is known as the “framing effect”, where framing refers to “the process by which people develop a particular conceptualisation of an issue or reorient their thinking about an issue” (Chong and Druckman, 2007). Framing is therefore a key consideration in designing a survey.

Thus, a primary consideration in designing a legal needs survey is how to communicate to respondents the subject matter of the survey. The concept of a justiciable problem, as defined above, is not commonly used or understood. It is also well documented that public understanding of law is generally low3 and that legal concepts tend to be narrowly conceived (often with an emphasis on criminal justice).4

Framing legal needs surveys around the concept of “justiciable” problems is therefore problematic, but so too are references to “legal” problems or even to “law” in general. Such references risk constraining survey responses to problems that have involved the use of legal services or those commonly conceived of as being legal. Thus, around half of the large-scale national legal needs surveys detailed in Table ‎1.1 have eschewed references to law in their framing.5 For example, the 2016 Mongolian survey was introduced as being about “problems that citizens commonly face.”

The risk of references to ‘law’ was made evident by an experiment in which a survey was randomly presented to individuals as being about either ‘different kinds of legal problems or disputes’ or, less technically, about ‘different kinds of problems or disputes’. It was found that a single use of the word ‘legal’ when introducing the survey led to a substantial decrease in the likelihood of problems being reported:

“Framing problems as ‘legal’ […] was associated with a significant reduction in problem prevalence when compared to introducing problems without any reference to them being legal … Where problems were introduced as legal, 50.8% reported one or more problem, with this rising to 62.6% where they were not.”
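The scale of such a framing effect can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the experiment’s actual arm sizes are not given here, so the sample size of 1 000 per arm is an assumption.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)              # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Reported rates: 50.8% with the 'legal' framing, 62.6% without.
# n = 1000 per arm is a hypothetical figure for illustration.
n = 1000
z, p = two_proportion_ztest(round(0.508 * n), n, round(0.626 * n), n)
```

With arms of that size, a difference of this magnitude would be highly statistically significant, consistent with the reported finding.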

Furthermore, responses may be influenced simply by revealing the name of the survey sponsor if the name is legal in nature (Pleasence et al., 2013).

Good practice in the conduct of legal needs surveys avoids references to law, justiciable problems or other technical concepts, and instead introduces surveys by describing the subject matter in purely lay terms. For example, surveys can begin with generic reference to “common” and/or “everyday” problems, perhaps accompanied by examples (e.g. land grabbing, unfair dismissal by an employer, being injured as a result of someone else’s mistake, or being involved in a dispute over money in a divorce settlement). This has the benefit of focusing attention on the types of issues that are justiciable, while using only lay language. As legal needs surveys are often conducted in different languages, a further benefit of using lay language is that there is less need to translate technical legal terms or the concept of justiciable problems, for which no direct equivalents exist.6

The same considerations apply in relation to the drafting of questions seeking to identify justiciable problems. Around two-thirds of the national legal needs surveys detailed in Table ‎1.1 have sought to exclude references to law in the problem identification process, along with the use of legal terminology.7 Following this practice, justiciable problems detailed in questionnaires should be carefully described in lay terms, so that descriptions are both recognisable to respondents and confined to problems that are justiciable. This requires both expert methodological and expert legal input.8

As the report of the 2016 Argentinian survey explained:

“Instead of referring to problems in legal terms, the circumstances of problems were presented: it was asked whether [respondents] had experienced a problem or situation of a certain kind, in colloquial and descriptive terms, rather than using legal language. This approach reduces underreporting of problems, including problems experienced by people who are not aware they have legal consequences.” (Subsecretaría de Acceso a la Justicia, Ministerio de Justicia y Derechos Humanos, 2016, p. 14)

2.2.2. The range of justiciable problems

The range of justiciable problems that could be studied is almost as broad as the range of people’s activities. While various categories of justiciable problems have been explored in most legal needs surveys, there has been significant variation in the mix of problem categories and types presented. All of the surveys in Table ‎1.1 and Table ‎1.2, for which details are available, captured data concerning family, employment and housing related problems; problems central to people’s lives and welfare all around the globe.9 Almost all also collected data concerning consumer and money related problems, and the majority collected data concerning problems relating to discrimination, education, injuries (due to negligence), neighbours, treatment by the police and government services (particularly welfare provision).10

Questions about certain problem types are more frequently posed in particular jurisdictions. Historically, for example, problems concerning identity documentation and land (as distinct from housing) have typically been explored in development contexts.11 In contrast, questions concerning immigration are usually addressed to people living in high income jurisdictions; the same is true of problems concerning the care of others. Some questions have been asked in relatively few jurisdictions, such as questions about defamation, or - in very recent surveys - online harassment (included in the 2017 Sierra Leonean survey).

A number of surveys have included questions about “other” problems once questions about specific categories have been answered. However, this is not good practice, as it introduces uncertainty and ambiguity as to the nature and scope of problems under study. Problems should be defined as clearly as possible. An unspecified “other” category places the onus on respondents to determine what such a category might contain, so frames of reference vary significantly among respondents (e.g. depending on their experience and interpretation of previous questions), and the likelihood of non-justiciable problems being reported increases.

The great amount of time involved in identifying justiciable problems means that there is always pressure to limit the range of problems included in legal needs surveys. Ultimately the range of problems included in legal needs surveys must reflect the concerns and interests of stakeholders and the technical limitations of such surveys. These vary between surveys.

Certain criteria can help determine the range of problems included in surveys. One will be problem prevalence, which links – as was discussed in Chapter 1 – to the efficiency of legal needs surveys as a research tool. Problem prevalence also constitutes one of Barendrecht et al.’s six approaches to determining “priorities in the justice system” (Barendrecht et al., 2008). However, other criteria are likely to be equally or even more influential in the minds of survey sponsors. These include the value, impact and cost of problems (both to those affected and more widely; particularly to public services) – alluded to in Barendrecht et al.’s approaches concerning “severity” and “needs for protection” – and Barendrecht et al.’s other approaches, relating to the costs of self-protection, the cost of leaving a situation, and supply side features, such as specialised courts.
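Criteria of this kind can be combined into a simple weighted ranking to shortlist problem categories for inclusion. The sketch below is purely illustrative; the category names, scores and weights are invented, not drawn from Barendrecht et al.

```python
# Hypothetical prioritisation sketch: rank candidate problem categories
# by a weighted sum of criterion scores (all names and numbers invented).
def rank_categories(categories, weights):
    """Return categories sorted by weighted criterion score, best first."""
    def score(c):
        return sum(weights[k] * c["scores"][k] for k in weights)
    return sorted(categories, key=score, reverse=True)

candidates = [
    {"name": "consumer", "scores": {"prevalence": 0.9, "severity": 0.3, "cost": 0.4}},
    {"name": "family",   "scores": {"prevalence": 0.5, "severity": 0.9, "cost": 0.8}},
]
weights = {"prevalence": 0.4, "severity": 0.4, "cost": 0.2}
ranked = rank_categories(candidates, weights)
```

In practice the weights would reflect the relative importance stakeholders attach to each criterion, which, as noted, varies between surveys.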

If legal needs survey type questions are incorporated into other types of surveys, then it may be appropriate to ask about only a very limited range of problem types. These might be those directly linked to the subject matter of the host survey (e.g. housing related problems) or, in the case of general surveys, to subjects of particular interest or concern to stakeholders.

Differences in the types of problems and the manner in which they are queried limit the extent to which findings in different jurisdictions can be compared. If comparison is important, particular care should be taken to use consistent problem identification questions, addressing equivalent problem types.

In the absence of any broad international standards for the conduct of legal needs surveys, efforts to achieve consistency among surveys have hitherto been possible only on a partial and ad hoc basis. Drawing on a comprehensive review of past surveys, Table ‎2.1 therefore sets out a taxonomy of all justiciable problems included in past national legal needs surveys. Eight primary problem categories are central: employment, family, accidental injury/illness, public services and administration, money and debt, consumer, community and natural resources, and land and housing. The use and evolution of such a taxonomy in constructing and analysing legal needs surveys would greatly increase the scope for comparison among future surveys’ findings.

2.2.3. Levels of detail

Questions aimed at identifying justiciable problems lie at the heart of legal needs surveys. However, such questions can be – and have been – delivered in many ways. For example, the 2005 Northern Irish survey presented respondents with 110 distinct problem types, while the 2017 Indian survey simply asked respondents whether they “had a dispute in the past 5 years” and followed up by asking what disputes were about.12 The most common approach is to present respondents with descriptions of a broad range of problems. Indeed, around half of surveys for which details are available ask questions about 70 or more distinct problems; often – in the case of face-to-face surveys – by presenting them on show-cards.13 The presentation of distinct problem categories has both advantages and disadvantages.

On the positive side, it increases clarity of purpose, meaning and scope. The more detail provided, the less respondents are left to interpret the scope of questions, and the lower the risk of misinterpretation and/or of relevant memories being neglected.14 Neglecting areas of memory will reduce problem reporting, affecting accuracy and potentially limiting the prospects of statistical analysis. In the specific context of legal needs surveys, experimental evidence indicates that more detailed questions result in increased reporting of problems, although the experiments were not conclusive, and the impact of varying levels of detail differed depending on the problem type (Pleasence et al., 2016).

An example of misinterpretation resulting from lack of detail is provided by the 2006-2009 English and Welsh Civil and Social Justice Survey, in which “discrimination” was often misunderstood as referring to insensitive or unpleasant behaviour, rather than to prejudicial treatment in relation to opportunities and access, as was intended. As a result, later surveys in the series asked about discrimination in the context of other defined problems (Pleasence et al., 2011a). No “discrimination” problem category appears in Table ‎2.1, reflecting the fact that discrimination (and, separately, harassment) occurs in the context of many of the problems set out in Table ‎2.1. Discrimination can be easily incorporated into surveys by adopting the innovation of the English and Welsh Civil and Social Justice Survey; namely, by asking about discrimination as a characteristic of the problems set out in Table ‎2.1.15

On the negative side, presenting respondents with numerous problems increases the risk of fatigue and satisficing behaviour.16 It is also time consuming, making it impractical for shorter surveys or for surveys where time needs to be made available for other questions. Thus, as a compromise between accuracy and brevity, a number of surveys have approached problem identification by presenting respondents with relatively short lists of problem categories, providing examples to increase clarity and assist recall. Similarly, the 2017-2018 pilot of the South African Governance, Public Safety and Justice Survey presented respondents with a short list of disputant types (including family/relatives/friends, neighbours, other individuals, community/civic groups, employers, company or business officials, health/education institutions or officials, and government institutions or officials), with some examples to aid recall.17

Given the centrality of problem identification, further experimental investigation into the impact of different approaches to problem identification on survey findings would be invaluable.

2.2.4. Exclusion of “trivial” problems

To avoid “being swamped with trivial matters” (Genn, 1999, p. 14), more than half of the national legal needs surveys detailed in Table ‎1.118 have asked respondents only about “difficult to solve” problems (e.g. following the lead of the highly influential Paths to Justice surveys19). While this approach significantly reduces problem reporting, it is problematic and not recommended.

Experimental evidence shows that asking only about “difficult to solve” problems can decrease problem reporting by around 30% (Pleasence et al., 2016). This hardly constitutes preventing “the floodgates opening”, and it is hugely problematic in conceptual terms, as it conflates problem occurrence, problem resolving strategy and legal capability. What is difficult to solve for one person may not be for another. As the authors of the 2008 Australian survey noted, some problems will therefore not be captured by surveys using the “difficult to solve” filter simply “because they were easy to handle” (Coumarelos et al., 2012, p. 11). This ease of handling may be a product of an individual’s greater legal capability (which is important to understand in relation to need) and/or the availability of more effective problem resolution mechanisms or services (which is also important to understand, and goes to the heart of access to justice policy). Even a problem leading to legal advice or a formal court process could potentially be viewed as “easy to solve” and not reported (e.g. as a result of delegating difficult decisions, or of good advice and representation). A far better approach to filtering out “trivial” problems is therefore to follow up only problems that reach a minimum seriousness threshold, after all problems have been identified. This approach is explored further below.

Table ‎2.1. Illustrative standard problem categories for legal needs surveys

(Problem categories are shown with their primary, secondary, tertiary and quaternary sub-categories nested beneath them.)

Employment
- Application and promotion
- Disciplinary procedures
- Unfair dismissal
- Rights at work
  - Pay, pension, etc. related
  - Working conditions
  - Other rights (e.g. hours, leave, etc.)
  - Maternity/paternity related
- Contract changes

Family
- Relationships and care of children
  - Relationship breakdown
    - Divorce or separation (binary)
    - Alimony/division of property
    - Other relationship problems
  - Care of children
    - Child support, custody and contact
    - Public law children
    - Adoption and guardianship
    - Child neglect
    - Other children problems
- Domestic violence (victim)
- Wills and probate

Accidental injury / illness
- Industrial disease
- Workplace accident
- Traffic related
- Victim of crime
- Clinical negligence

Public services and administration
- Other quality of medical care
- Compulsory treatment/discharge
- Access to health services
- Disease control
- Abuse by state officials
  - Unfair treatment by the police
  - Abuse by other state officials
- Education (respondent)
  - Fairness of assessment
  - Access to education
- Access to public services (excl. health)
- Citizenship, ID and certification
  - Obtaining ID
  - Other state registration/certification

Money and government
- Government payments, loans and allowances
  - Social safety net payments/loans (excluding pensions)
  - State pension
  - Disability payments/loans
  - Educational payments/loans
  - Other payments/loans
- Social safety net tax allowances

Money and debt
- Problems paying bills/repaying loans (excluding creditor action)
- Creditors taking action
  - Legal action (or threat of)
  - Harassment / intimidation
  - Loss of collateral (excluding land)
    - Pawnshop related
    - Other (excluding land)
- Problems recovering money owed (loans and insurance)
  - Problem collecting money owed to you (loans; category excludes tenants)
  - Problem obtaining insurance pay-out
- Financial services
  - Inaccurate credit rating
  - Problems concerning banks, financial advisors, etc.
    - Misrepresentation / mis-selling
    - Mismanagement (financial loss)
    - Incorrect / disputed fees

Consumer
- Services (excluding utilities)
  - Other professional services (e.g. accountants, mechanics, plumbers, etc.)
  - Other services (e.g. transport services, leisure services, etc.)
    - Non-delivery or inadequacy of service
    - Incorrect / disputed fees
- Goods
  - Non-delivery of goods
  - Defective / unsafe goods
- Utilities
  - Incorrect / disputed billing
  - Non-delivery of contracted quality
  - Access to utilities
  - Other utilities

Community and natural resources
- Access to natural resources
  - Access to forest, waterways, etc.
    - Access (e.g. rights of way)
    - Hunting, fishing, foraging
- Maintenance and protection
  - Environmental damage
- Governance of community groups

Land and housing
- Land grabbing, expropriation
  - Use of land
    - Subsistence farming
- Building, conveyancing and boundaries
  - Building permissions, permits
- Home ownership
  - Neighbours (excl. anti-social)
  - Strata related
- Condition of housing
- Terms of lease
- Neighbours (excl. anti-social)
- Neighbours (anti-social)
- Problems as a landlord
  - Rent related
  - Property damage

Care (excl. children)
- Residential care
- Care of adults

Environmental (other)
- Development project related

Internet related
- Abuse, harassment, bullying

Business related
- Regulation, permits, etc.
- Employment (of others)
- Land, business premises, etc.
  - Use/expropriation of land
  - Acquisition, development, sale, etc.
  - Rented business premises
- Business structure
- Corruption, bribes, protection

Victim of crime
- Violence (excl. domestic violence)
- Theft, burglary, dishonesty

Accusation / offending
- Arrested / detained
- Fines / outstanding fines
- Road traffic
- Other offending
2.2.5. Business related problems

As with individuals acting in a private capacity, businesses – both individuals acting in a business capacity and distinct organisations – face justiciable problems, which impact on business functioning and sustainability, the wellbeing of workers, and the wider economy.20 As noted in Chapter 1, a number of dedicated legal needs surveys of businesses have been undertaken in recent years. Moreover, around one-third of past surveys of individuals have incorporated questions about problems faced by respondents in a business, as well as in a personal, capacity. For many self-employed persons, the two capacities can feel, and be, difficult to distinguish.21 This raises the significance of business-related problems in many people’s lives. It is particularly relevant in countries in which a significant proportion of the population work on their “own account”.22 In general, in low income countries around half of all workers work on their own account.23

The significance of business-related problems in people’s lives poses methodological challenges for ensuring that the universe of problems recorded by surveys is distinct and well defined.

If business problems are within a general population legal needs survey’s scope, questions should ideally be formulated to distinguish business and personal problems and to enable samples of “individuals” and “businesses” to be separately identified, since business capacity is a distinct functional capacity (and can involve a distinct legal capacity) and problems related to running a business generally have distinctive characteristics. However, clear distinctions will not always be straightforward, or even possible, owing to the capacity-ambiguity inherent in some aspects of working on one’s own account (e.g. relating to finance where there is no separate business entity in law). The most coherent approach may therefore be to draw a distinction between problems that are solely an aspect of respondents’ work and problems that are at least to some extent personal.

In considering how business and personal problems can be distinguished, three basic approaches have been adopted in the past. The first has been to exclude all business-related problems from the scope of surveys. This was the approach of the Paths to Justice surveys of the late 1990s, which made it clear that problems experienced in a business capacity should not be reported. The second approach has been to ask questions about business related problems separately (as in, for example, the 2008 Australian LAW survey). If this approach is adopted, it should be made clear that questions about personal problems exclude problems experienced in a business capacity (although this has rarely been the case). The third approach has been to ask, on follow-up after selection for detailed questioning, whether problems relate to business activities (as in, for example, the 2017 iteration of the World Justice Project’s General Population Poll and the 2017-18 Nepalese survey).24 Provided that problems selected for follow-up are randomly selected from those identified, reasonable estimates of the total incidence of personal and business related problems may be possible – provided sufficient problems are followed up – but a limitation of this approach is that data are not available across the full sample of identified problems.25
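Under the third approach, the share of business problems observed among randomly followed-up problems can be scaled to the full count of identified problems. A minimal sketch, using a synthetic dataset in place of real survey data:

```python
import random

def estimate_business_incidence(problems, follow_up_n, seed=0):
    """Estimate totals of business vs. personal problems when the
    business/personal flag is learned only for a random follow-up subset."""
    rng = random.Random(seed)
    followed = rng.sample(problems, min(follow_up_n, len(problems)))
    share_business = sum(p["is_business"] for p in followed) / len(followed)
    total = len(problems)
    return total * share_business, total * (1 - share_business)

# Synthetic data: 100 identified problems, 25 of them business related.
problems = [{"is_business": i < 25} for i in range(100)]
```

Following up every identified problem recovers the true split exactly; smaller follow-up samples yield estimates subject to sampling error, which reflects the limitation that data are not available across the full sample of identified problems.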

Distinguishing business from personal problems provides conceptual coherence, provides greater flexibility in relation to analysis and facilitates the comparison of survey findings. However, in some – particularly low income – countries and social contexts, it may be difficult to draw the distinction and perhaps, in any event, valuable to include problems experienced by individuals working on their own account. A case in point is the 2017-18 Nepalese survey, in which respondents were asked to include only problems that they had faced themselves, but this was defined to include “problems experienced through a business that provides you with self-employment (but not an enterprise providing employment to others).”

2.2.6. Crime victimisation and offending

While not a primary focus of legal needs surveys, the majority of past legal needs surveys have asked about one or more aspects of respondents’ experience of crime. This is in addition to questions concerning civil dimensions of criminal behaviours (such as domestic violence, corruption, crime compensation, etc.). As with business related problems, the experience of crime is conceptually quite distinct from the experience of justiciable problems. However, there are commonalities between the criminal and civil justice systems and their associated sources of help. Furthermore, strong associations have been found between the social patterning of crime victimisation, offending and justiciable problem experience.26

Aside from illuminating access to criminal justice issues, the primary benefit of including questions on crime in legal needs surveys is that it allows the investigation of overlapping service needs. If there is little survey sponsor interest in such matters, then questions concerning crime can be limited or excluded, as appropriate. This is particularly so in jurisdictions in which separate victimisation surveys are conducted. Currently, Statistics South Africa is exploring ways to gather victimisation data and legal needs data in rotating years to provide a more comprehensive picture of access to justice.27

For reference, detailed guidance on the conduct of victimisation surveys has been produced elsewhere, including by the United Nations Office on Drugs and Crime and the United Nations Economic Commission for Europe.28

2.3. Problem seriousness

Justiciable problems vary in their seriousness; a matter relevant to prioritisation and action, both by individuals and public services. Accordingly, the great majority of surveys detailed in Table ‎1.1 and Table ‎1.2, for which details are available,29 have investigated relative seriousness in some way. However, conceptualisations of seriousness diverge.

Past surveys have variously sought to measure seriousness with reference to perceptions, economic value and impact; all of which are connected, but distinct. Problems perceived as serious do not necessarily concern matters of high value and are not necessarily impactful; although they will often have substantial perceived value and potential impact.

The 2011 Moldovan and 2016 Mongolian surveys, for example, asked (respectively) about perceived problem “importance” and “seriousness” in the abstract; while the 2012 Georgian survey, for example, asked about the perceived importance of resolving problems. The 2011 Taiwanese survey, which asked about both seriousness and importance of resolution, identified significant differences in reporting patterns between perceptions of seriousness in the abstract and of the importance of resolving problems (Chen et al., 2012b). Turning to economic value, the 2012 Macedonian and Taiwanese surveys asked about the monetary value of matters in dispute; while the 2006 English and Welsh survey adopted the approach of contingent valuation in asking about willingness to pay for problems to be resolved. As for impact, the 2004 Dutch survey, for example, asked about the extent to which respondents became preoccupied with problems; and the 2012 Tajik survey asked about the impact of problems on life in general. Increasingly, surveys are also following the lead of the English and Welsh Civil and Social Justice Survey and investigating specific life-impacts (in various degrees of detail).

2.3.1. The broader impact of justiciable problems

Questions about the impact of justiciable problems on a wide range of life circumstances are important if there is interest in justiciable problems’ broader social and economic impact. This line of inquiry enables policymakers to connect legal problems to broader social and economic development policies and outcomes.

The principal life-impact question developed for the 2004 English and Welsh Civil and Social Justice Survey included nine impact areas: physical health, stress, relationships, violence (aimed at the respondent), property damage, the need to move home, loss of employment, loss of income, and loss of confidence. The form of the question has been widely adopted, with most surveys asking about a similar range of impacts.

However, some surveys have looked at a broader range of impacts. The 2010 English and Welsh Civil and Social Justice Panel Survey and 2012 Georgian survey both covered 18 impact areas, with questions extending to alcohol and drug abuse, fear, and problems related to documentation. The 2017-18 Nepalese survey went further still. It asked about more than 50 impact areas, tailored to particular problem types. These extended to stigmatisation, denial of public and community services, and problems concerning documentation. However, there may be limited value to such detailed data if there are relatively few problems of relevant types or instances of particular impacts.

A small number of surveys have included follow-up questions designed to obtain further details of life changes, upon which estimates of the economic impact of problems on individuals and public services were based. For example, the 2004 survey included 23 follow-up questions, to establish, for example, the extent of lost income, receipt of state support (e.g. as a consequence of unemployment) and use of public services (e.g. medical services). On the basis of responses to these questions, the economic cost of justiciable problems to individuals and public services was estimated to be around US$5 billion per year (Pleasence, 2006, p. i). In the most recent exercise of this type, the 2014 Canadian survey incorporated 30 detailed follow-up impact questions. Here, the annual cost to public services was estimated to be “approximately $800 million (and perhaps significantly more)” (Farrow et al., 2016, p. 16).
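At its simplest, the arithmetic behind such estimates is a scaling of mean per-respondent costs (derived from the follow-up questions) up to the adult population. The figures below are hypothetical, and the cited studies’ actual models were considerably more detailed:

```python
def population_cost_estimate(sample_costs, adult_population):
    """Scale the mean annual cost per respondent (including the zero-cost
    majority) in a representative sample up to a population total."""
    mean_cost = sum(sample_costs) / len(sample_costs)
    return mean_cost * adult_population

# Hypothetical: five respondents, two with problem-related costs; 40m adults.
total = population_cost_estimate([0, 0, 0, 120.0, 380.0], 40_000_000)
```

Real exercises would also weight the sample, separate cost types (lost income, state support, service use) and bound the estimates, which is why published figures are expressed as approximations.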

Building on life-impact questions, some surveys – following the lead of the 2006 New Zealand survey – have also asked about factors leading to the experience of reported justiciable problems (e.g. the 2016 Argentinian survey).

2.3.2. Abstract problem seriousness scales

A variety of abstract problem seriousness scale questions have been used in legal needs surveys in recent years.

Following a major methodological review, the 2010 English and Welsh Civil and Social Justice Panel Survey introduced a simple visual analogue scale (VAS)30 (a technique used extensively in medicine31) to assess problem seriousness. This involved presenting respondents with a single straight line with “anchor points” marked near the top and bottom (comprising short descriptions of a very serious and relatively trivial problem32), and asking respondents to indicate where a problem sat on the line. The practice has since been adopted by other surveys, including the 2017 Sierra Leonean survey, and has been adapted into an equivalent numerical rating scale (NRS) in the case of, for example, the 2014 English and Welsh survey (conducted via telephone) and the World Justice Project’s 2017 General Population Poll (conducted via various channels). Simple NRS-based questions have also been used in other surveys, such as the 2016 Argentinian and Mongolian surveys.

The psychometric properties of such VAS and NRS approaches have not been tested, and they are unlikely to be as reliable as a fully developed multiple item scale.33 However, a VAS can provide a simple, quick and adequate measure,34 as can an NRS, which has the additional advantage of being capable of oral administration. Indeed, in other fields NRS approaches have demonstrated excellent test-retest reliability,35 along with good sensitivity, without the practical difficulties of a VAS (Williamson and Hoggart, 2005).36

2.4. Units of measurement

Contrasting with the “largely isolated” (Pleasence et al., 2016, p. 70) tradition of (largely sub-national) U.S. legal needs surveys that ask questions about household experience of justiciable problems, more than three-quarters of the national legal needs surveys detailed in Table ‎1.1 asked about individual problem experience.37

Just five surveys asked about household experience. These include the access to justice module of the World Justice Project’s 2016 General Population Poll, although the 2017 General Population Poll was changed to ask about individual experience. In addition, a small number of surveys asked about the experience of respondents and their partners (the 2004, 2006 and 2008 Canadian surveys) or about respondents and their (non-adult) children (the 2005 Japanese survey, 1997 New Zealand survey and 2011 Taiwanese survey). And some individual experience-based surveys collected data from all adult members of households (such as the 2001, 2004 and 2006 to 2009 English and Welsh surveys), to enable household experience to be investigated alongside individual experience.38

It can be argued that “there are distinct benefits to the collection of household data, [such as that it] may more accurately reflect the experience of shared problems (i.e. those that are faced by families together), the linking (and counting) of which can be problematic in individual surveys” (Pleasence et al., 2013, p. 24). However, not all problems within households are shared. Some problems are between members of a household (raising obstacles to both the fact and nature of reporting), and individual respondents may be unaware or have a false impression of problems elsewhere within a household (especially when details are deliberately withheld). Moreover, households differ, and although individual data can in some cases be aggregated to household level (if information concerning whether problems are shared is collected), the reverse is generally not possible. Thus, surveys of individuals are generally preferable in the case of legal needs studies.
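The asymmetry described above can be illustrated with a short sketch: given individual-level records, reports can be de-duplicated upward to household counts, whereas household-level totals alone cannot be split back into individual experiences. All field and function names here are hypothetical, and a common problem identifier across respondents presupposes that the survey asked whether problems were shared.

```python
from collections import defaultdict

# Hypothetical individual-level records: (household_id, respondent_id,
# problem_id). A shared problem appears under more than one respondent
# with the same problem_id.
records = [
    ("H1", "R1", "P1"),  # shared problem, reported by R1...
    ("H1", "R2", "P1"),  # ...and reported again by R2
    ("H1", "R2", "P2"),  # problem experienced by R2 alone
    ("H2", "R3", "P3"),
]

def household_problem_counts(records):
    """Aggregate individual reports up to household level, counting
    each shared problem once. The reverse (recovering individual
    experience from household totals) is generally not possible."""
    problems = defaultdict(set)
    for household, _respondent, problem in records:
        problems[household].add(problem)  # set membership de-duplicates P1
    return {household: len(p) for household, p in problems.items()}

print(household_problem_counts(records))  # {'H1': 2, 'H2': 1}
```

Note that the de-duplication depends entirely on the shared-problem information collected from respondents; without it, P1 would be counted twice.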

To properly reflect a population’s experience of justiciable problems, individual respondents should be randomly selected. If a sample frame is composed of households (or similar), and not all members of a household are interviewed, then individual respondents should be selected at random within households. For reasons similar to those just set out, proxy interviews should be avoided wherever possible. If proxy interviews are deemed necessary, then some questions (e.g. questions concerning domestic violence) become inappropriate.
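Random within-household selection can be sketched minimally as follows (a simplified stand-in for devices such as the Kish grid; the function and roster names are illustrative only):

```python
import random

def select_respondent(household_adults, rng=random):
    """Select one adult at random from an enumerated household roster,
    giving every listed adult an equal chance of being interviewed."""
    return rng.choice(household_adults)

roster = ["adult_1", "adult_2", "adult_3"]
rng = random.Random(42)  # fixed seed only to make the sketch reproducible
print(select_respondent(roster, rng))
```

In practice, the roster would be enumerated by the interviewer at the doorstep before selection, and selection rules would be fixed in advance to prevent substitution of more readily available household members.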

In the case of surveys of individuals, the standard unit of measurement for problem level analyses should be individually experienced problems, even if this means that shared problems may be counted more than once (e.g. by different members of the same household).

2.4.1. Community problems

Just as household members may share problems, problems may also be shared by other groups of individuals, such as work colleagues and members of the wider community.

Although no legal needs surveys have been designed to measure the experience of justiciable problems by communities (as distinct from within communities), many surveys have nonetheless asked about problems commonly experienced across communities (such as those concerning expropriation, environmental damage, access to public services, etc.). The 2017-2018 Nepal survey went further than most, and also asked about a category of problems concerning “community resources”.

As with problems shared within households, respondents can be asked whether their problems are shared more widely, and with whom. For example, the 2017 General Population Poll asked whether problems were shared with “other people, neighbours, or other members of your community” and whether people took the same position or collective action “to achieve a solution.” Thus, it was possible to establish whether reported problems were instances of larger community problems and whether a community response was taken to such problems.

Questioning along these lines builds up a picture of the volume of shared problems. With additional questioning concerning the extent to which problems are shared, questioning could also shed light on the scale of community problems and (theoretically, at least, subject to sample structure) differences between communities.

2.5. Legal needs survey reference periods

A legal needs survey’s reference period is “the time frame for which survey respondents are asked to report […] experiences of interest” (Lavrakas, 2008). Deciding on the appropriate reference period involves balancing two main factors: ensuring that a sufficient number of problems are reported to enable analysis and reporting as intended, and (particularly in the context of monitoring) the contemporaneity of survey data.39 In deciding how many years to go back, it is important to recognise that recall becomes increasingly unreliable the further back in time it is extended. Moreover, different types of justiciable problems are associated with different “forgetting curves” (i.e. patterns of recall error over time) (Tourangeau et al., 2000, p. 84; Pleasence et al., 2009). For example, consumer related problems tend to be forgotten more quickly than personal injuries, which, in turn, tend to be forgotten more quickly than divorce (Pleasence et al., 2009).

The net effect is that the composition of problem samples varies with survey reference periods, and the proportionate increase in the volume of problems reported diminishes as reference periods are extended. Experimental evidence indicates that increasing a legal needs survey’s reference period from one to three years has only “a fairly modest” impact on problem reporting (Pleasence et al., 2016). Consistent with this, the 2006 Hong Kong Demand & Supply of Legal & Related Services survey, which asked respondents to recall events from one year, five years and over the entire life course – within a single survey – recorded respective problem prevalence rates of 19%, 32% and 40%.
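Simple arithmetic on the Hong Kong figures quoted above makes the diminishing returns concrete. The calculation is purely illustrative: the shrinking increments reflect both forgetting and the fact that respondents with problems in multiple years count only once towards prevalence.

```python
# Prevalence rates reported by the 2006 Hong Kong survey for one-year,
# five-year and lifetime reference periods.
prevalence = {"1yr": 0.19, "5yr": 0.32, "lifetime": 0.40}

# The first year alone accounts for 19 percentage points of prevalence...
first_year_points = prevalence["1yr"] * 100

# ...while years two to five add, on average, far fewer points per year.
per_extra_year_points = (prevalence["5yr"] - prevalence["1yr"]) * 100 / 4

print(f"Year 1: {first_year_points:.1f} percentage points")
print(f"Years 2-5: {per_extra_year_points:.2f} points per additional year")
```

On these figures, each additional year beyond the first adds roughly a sixth of the prevalence captured by the first year alone.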

Looking at the surveys detailed in Table ‎1.1 and Table ‎1.2, slightly fewer than one-third have adopted a three-year reference period,40 and four more a three-and-a-half-year reference period (where January 1st was used to anchor the reference period three calendar years prior to interview). The next most common reference periods were five years41 and four years (the 10 HiiL Justice Needs and Satisfaction surveys), followed by one year.42 Drawing on the review informing this Guide, the World Justice Project’s General Population Poll access to justice module moved from a one-year to a two-year reference period between 2016 and 2017. A two-year reference period was also implemented by the 2017-2018 Nepalese survey and the 2017-2018 South African Governance Public Safety and Justice Survey Pilot. In the context of the typical sample sizes of legal needs surveys, a one-year reference period is unlikely to yield sufficient problems to enable diverse or detailed analysis of respondents’ problem resolving behaviour.

While the vast majority of past legal needs surveys have been cross-sectional in their design, the English and Welsh Civil and Social Justice Survey switched to a panel design in 2010. This change of design gave rise to additional considerations regarding the survey’s reference period and saw a switch from a three-year reference period to an 18-month reference period, to reflect the gap between surveys. Problems not concluded at the time of the 2010 survey were revisited in the 2012 survey.

2.6. Problem resolving behaviour

A central focus of legal needs surveys is problem resolving behaviour. This can extend to a broad range of activities, and past surveys have varied considerably in the types of activity and level of detail asked about.

Survey content is ultimately determined by the choice of questions, which reflect the interests and concerns of the survey’s sponsors and stakeholders. However, in order to generate a comprehensive picture of respondents’ problem resolving behaviour, three separate areas of activity must be addressed:

  • Help seeking

  • Use of processes

  • Other activities that support problem resolution.

Processes are discussed in the next section; they are distinct in that they do not necessarily require initiation or engagement on the part of survey respondents. Help seeking and other activities that support problem resolution are discussed in this section.

2.6.1. Sources of help

All but one of the surveys detailed in Table ‎1.1 and Table ‎1.2, for which details are available,43 explicitly asked whether or not help had been sought from a lawyer, professional or other source. In the majority of surveys, respondents were presented with a list of potential sources of help and invited to respond to each individually. In all but one of the remaining surveys, an open question was used. The anomaly was the 2011 Jordanian survey, which asked only a narrow question about whether an attorney was hired.

The 2007 Bulgarian survey was the only survey not to explicitly ask whether or not help had been sought from a lawyer, professional or other source. However, it did ask (through a broad open question) what respondents had done when faced with problems and, separately, whether they had sought “additional information”. Thus, all of the surveys provided an opportunity for respondents to report help seeking.

As noted previously, more detailed questions can lower the risk of misinterpretation and/or neglect of relevant memories. However, when there are many potential answers, only limited specification may be possible, and care needs to be taken to avoid too narrow a focus. Care also needs to be taken to ensure that details are effectively communicated. Experimental evidence indicates that asking distinct and separate questions about sources of help can significantly increase reporting rates, compared to using a simple list format (Pleasence et al., 2016). In the context of online and other self-completion surveys – from which this evidence was drawn – this makes clear the importance of sustaining concentration across all question elements. In the context of other surveys, such as face-to-face surveys, it points to the value of presenting list items to respondents clearly and separately whenever possible.

The terminology used to describe sources of advice, in generic terms, has varied between surveys. Terms used to date have included “another person”, “someone”, “people”, “professionals” and “organisations”.44 The most common phrasing has been “people or organisations”, which was used in more than one-third of the surveys.

Past surveys have referenced numerous sources of help as examples or as the basis for lists: including lawyers, civil society organisations, the police, unions, community leaders and organisations, employers, politicians, government departments (central, regional and local), religious leaders, the media, health professionals, financial services, school staff, family and friends. It would be impossible to reference them all in any single survey, and inappropriate, as many are specific to particular places. Furthermore, terminology describing legal professionals and services varies from place to place. Consequently, more than 40 different terms have been used to do so in past surveys, including “lawyer”, “attorney”, “solicitor”, “barrister”, “notary”, “paralegal”, “legal executive”, “legal consultant”, “legal aid lawyer”, “government lawyer”, “insurance company legal service”, “legal clinic”, “legal advice office”, “legal aid”, “NGO with free legal advice”, as well as various named services.

Previous legal needs surveys suggest that respondents often struggle to recall the exact name or (professional) nature of a source of help. This is not just a memory issue but also one of comprehension. It is therefore often unreasonable to expect a respondent to understand and recall the precise name or nature of an organisation or other source of help. The question should still be asked, but the associated issues of validity and reliability need to be understood.

Table ‎2.2 sets out a suggested taxonomy of sources of help to inform the drafting of legal needs survey questions and coding of responses. As with types of justiciable problems, a consistent approach would greatly increase the scope for comparison of findings.

A typology of sources of help might focus on various source characteristics, such as sector (i.e. government, commercial, civil society, community, etc.), degree of specialisation, extent of assistance, cost to clients (e.g. free or paid for), regulatory framework,45 or manner of service delivery. However, mirroring the primary concern of the surveys detailed in Table ‎1.1 and Table ‎1.2 – to establish respondents’ recourse to independent and expert help – Table ‎2.2 first distinguishes between help obtained from (i) non-expert friends, family and acquaintances, (ii) legal and professional advice services, and (iii) other sources. Importantly, advice services included in the legal and professional advice services category must provide some information, advice or representation of a legal nature. Table ‎2.2 then distinguishes further between sectors, and then on other bases.

The categories included in Table ‎2.2 reflect those most often referred to in the reports of the surveys detailed in Table ‎1.1 and Table ‎1.2.46 Some categories of sources are conceptually complex and difficult to communicate. Here, the experience of past survey authors can be particularly helpful. For example, HiiL’s use of the phrase “designated formal authority” in its Justice Needs and Satisfaction Surveys captures the idea of a range of organisations that includes regulators, Ombudsman schemes and civil enforcement authorities.

Table ‎2.2. Illustrative standard sources of help categories for legal needs surveys

Help category

Primary sub-categories

Secondary sub-categories

Tertiary sub-categories

Quaternary sub-categories

Family, friends and acquaintances

With relevant expertise (code below)

Without relevant expertise

Legal and advice sector

Government provided legal/advice services

Legal aid

Legal aid staff service



Provision through private practice

Other government legal services

Government legal/advice services

Public facing legal/advice service

Offender release legal/advice service

Other targeted legal/advice service

Government legal department

Dispute resolution authorities


Police and prosecution authorities

Courts and tribunals




Designated formal authority/agency

Other formal dispute resolution authorities

Mediation, conciliation, arbitration


Independent legal services provided through membership or association

Employment related

Through employer

Through union / professional association

Insurance legal/advice service


Law centres, clinics and legal/advice agencies (excl. government)

Law Centres

Independent legal/advice agencies

University legal clinics


Private sector lawyers



Specialist advocate/barrister



Issue specific

Other independent advice services

Other professionals

Health and welfare

Health professionals

Social workers



Other government

Administrative department

National, regional, local, etc.



Other civil society/charity

National, regional, local, etc.

Other community

Community leader

Community organisation





Employment related


Trade union / Professional body
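One way to operationalise a taxonomy such as Table ‎2.2 for coding survey responses is as a nested coding frame. The sketch below uses the table’s top-level categories with abridged sub-categories; the grouping of the residual categories under a single “Other sources” key is an assumption made for brevity, not part of the table itself.

```python
# A simplified, partial coding frame based on the top levels of
# Table 2.2 (sub-categories abridged for illustration).
SOURCES_OF_HELP = {
    "Family, friends and acquaintances": [
        "With relevant expertise",
        "Without relevant expertise",
    ],
    "Legal and advice sector": [
        "Government provided legal/advice services",
        "Dispute resolution authorities",
        "Independent legal services provided through membership or association",
        "Law centres, clinics and legal/advice agencies (excl. government)",
        "Private sector lawyers",
        "Other independent advice services",
    ],
    "Other sources": [  # residual grouping assumed for this sketch
        "Other professionals",
        "Other government",
        "Other civil society/charity",
        "Other community",
        "Employment related",
    ],
}

def code_response(category, sub_category):
    """Validate a coded response against the frame; returns a
    (category, sub_category) tuple or raises ValueError."""
    if sub_category not in SOURCES_OF_HELP.get(category, []):
        raise ValueError(f"Unknown code: {category} / {sub_category}")
    return (category, sub_category)

print(code_response("Legal and advice sector", "Private sector lawyers"))
```

Coding responses against a fixed frame of this kind, rather than as free text, is what makes cross-survey comparison of the kind discussed above feasible.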



2.6.2. Information, advice and representation

The context and form of past legal needs surveys suggest that, when asking about sources of help, surveys have been principally concerned with help that is given personally (and sometimes only personalised forms of such help). This is particularly evident in relation to the (at least) 30 surveys, including all five iterations of the English and Welsh Civil and Social Justice Survey and 10 iterations of the Justice Needs and Satisfaction Survey, which separately asked about information obtained from, for example, websites, books, leaflets, self-help guides and other media (i.e. mass communication) sources.

With the development of sophisticated and intelligent web and app-based services, distinctions between plain information, help given personally and personalised help are likely to become increasingly blurred, requiring questions to adapt and capture more data about help obtained from online resources.

As regards levels of services, while distinctions between information, advice and representation are in general well understood by legal professionals – the first is generic, the second tailored to individual clients’ circumstances, and the third involves action taken on behalf of clients – it is doubtful whether survey respondents easily recognise these sometimes subtle distinctions. Thus, if questions asking about help are intended to be restricted to particular levels of help, they must specify these levels clearly or inquire further into the matter through follow-up questions.

As with sources of help, the terminology used to describe help varies between surveys. Commonly used words have been “advice”, “help”, “information” and “assistance” (in order of prevalence), while the most used phrase has been “information or advice”. The phrase “information, advice or help” has also been used. Although not the most commonly used term, “help” has been the term most often used on its own.

One problem with use of the term “help” is that it is too broad. So, for example, a respondent could report having obtained help from a court to resolve a problem, meaning that a court process helped to resolve the problem.

When drafting questions on sources of help, key considerations are which terms best reflect behaviour of interest, which will be best understood, and which will best assist recall.

2.6.3. Seeking, contacting and receiving

Past surveys have also varied in whether they have included a question or a combination of questions about help having been sought, contact having been made with sources of help, and help having been obtained. Initial questions concerning help have been phrased in terms of (in decreasing order of prevalence) “seeking”, “getting”, “contacting”, “obtaining”, “trying to obtain”, “receiving”, “consulting”, “hiring” and “trying to contact”.

With a significant minority of people who seek help failing to obtain it, asking only about whether help was sought, whether a source of (potential) help was contacted, or whether an attempt was made to obtain help, will leave unanswered the question of how many people managed to actually obtain help in resolving justiciable problems. Thus, most surveys which have asked about help seeking have also asked about whether help was obtained. However, some have not, such as the 2008 Australian and 2011 Moldovan surveys. This should be avoided.

2.6.4. Help from family, friends and the other party

Help from family and friends is generally very different in character to help received elsewhere. It is usually non-expert, not independent and informal. It is therefore important not to conflate such informal help with help sought from an independent source.

Some surveys, such as the 2016 Argentinian survey, have identified help from family and friends separately from help from other sources, while others have asked about all sources of help through the same question. In the case of surveys that have presented lists of potential sources of help to respondents, such as the World Justice Project’s General Population Poll, “family and friends” has generally been expressly included as a list item.

Occasionally, help from family or friends will be expert, and if so, it is useful to identify this. However, to date, only relatively detailed surveys, such as the English and Welsh Civil and Social Justice Survey, have asked about the expertise and nature of advice obtained from family and friends, as well as from elsewhere.

Also distinct from independent help is help received from the other party to a dispute. This cannot be regarded as independent, whether expert or not. Thus, it should be explicitly excluded, or delineated within core questions designed to identify sources of help. If help provided by the other party is of particular interest, it should be asked about elsewhere.

2.6.5. Reference to “legal” help

As discussed in the context of survey framing and identification of justiciable problems, reference to ‘legal’ help should be avoided. Again, it is likely to narrow respondents’ conceptions of the types of help being asked about and assumes they have an understanding of what constitutes ‘legal’. Knowing a source of help will provide a good indication of the character of help provided, and it is also possible to use follow-up questions to explore this in greater detail. This does not mean there should be no references to help from ‘lawyers’ or other ‘legal’ or named sources; although, again, the range of sources of interest should ideally be indicated in lay terms.

2.6.6. Help obtained on behalf of respondents

Reflecting the general tendency to inquire into individual (rather than household, etc.) problem experience, the great majority of surveys have asked only whether respondents personally sought/obtained help. An exception was the 2008 Australian survey, which asked whether “you or a relative or friend on your behalf” had sought/obtained help. Widening the scope of the question can have advantages. Importantly, it recognises that some people may not be able to, or choose not to, access help themselves.

2.6.7. A complete picture of problem resolving behaviour

While some surveys display an interest in only certain aspects of problem resolving behaviour, others are designed to be comprehensive. If a comprehensive picture is sought, then questions should address activities that go beyond obtaining information or advice, or utilising processes. These include evidence gathering and consideration of options.

If residual problem resolving behaviour is not investigated, then it is not possible to identify those respondents who took no action to resolve problems: a group often of interest to stakeholders. Despite this limitation, many survey reports have discussed inaction without a basis for its identification.

2.7. Process

As detailed above, there is a distinction between respondents’ problem resolving behaviour and processes that form part of problem resolution. The latter can occur with or without initiation or engagement by the respondent. Thus, to maintain conceptual clarity, questions about processes should ideally be separate from those about other aspects of problem resolution.

To develop an accurate description of problem resolution, it is necessary to establish whether and which processes form part of problem resolving, who initiates them and whether different parties engage with them. This does not require detailed questioning about processes, only about whether a process did or did not play a role in problem resolution. Indeed, while some older surveys, most notably the Paths to Justice surveys, devoted significant questionnaire space to enquiring about respondents’ experiences and navigation of formal processes, the rarity of formal processes and lay people’s limited technical understanding of and familiarity with them have led more recent surveys to shift their focus to early stage problem resolution decision-making.

The great majority of surveys listed in Table ‎1.1 and Table ‎1.2 have investigated processes used as part of problem resolution. The few exceptions, such as the 2006 New Zealand survey,47 have focused narrowly on respondents’ abilities to access help when needed.

2.7.1. Types of process

Surveys have asked about different types of processes. Surveys routinely include some focus on state courts and tribunals, as well as typically exploring “mediation” and “negotiation”.48 Some surveys have also explored other forms of resolution process.

Particular challenges face efforts both to draw up appropriate typologies of process in individual surveys and to compare the findings of different surveys. A great many different dispute resolution processes exist around the world, involving different forms of authority, different types of participant, different rules on standing, different approaches to resolution, and different rules of operation. Similar processes may be referred to using different names and dissimilar processes may be referred to using the same name. For example, although “mediation” may be technically defined as involvement of an independent third party to help different sides come to agreement, without taking sides, offering advice or imposing or requiring agreement, the term is commonly applied to many forms of intermediation, conciliation, arbitration and adjudication.

Table ‎2.3 sets out an illustrative taxonomy of process. Following the primary interest of past surveys in determining the identity of third parties involved in problem resolution, processes are initially split into five categories reflecting different identities (“state”, “community”, “religion”, “other” and “none”), but then further divided according to different approaches to resolution (e.g. intervention, investigation, adjudication, mediation, etc.).

It is notable that past legal needs survey questions have often conflated different aspects of process, such as the identity of the individual or organisation responsible for the process and the nature of the process itself. In many cases, the result has been an inability to disaggregate these different aspects.

In contrast to Table ‎2.1, which lays out illustrative problem categories, Table ‎2.3 places substantial responsibility on survey authors to appropriately map different forms of locally available process into this taxonomy.

Table ‎2.3. Illustrative standard process categories for legal needs surveys

Process category

Primary sub-categories

Secondary sub-categories

Tertiary sub-categories

Quaternary sub-categories

No third party

Direct negotiation (personal)

Indirect negotiation (through representatives)




Hearing, paper based, online, etc.

Investigation, adjudication, mediation, etc.


Enforcement service

Designated formal authority / agency (civil)



Investigation, adjudication, mediation, etc.

Other civil enforcement authority

Prosecution authority



Arrest / prosecution




Arrest / prosecution


Other government


Investigation, adjudication, mediation, etc.





Community leader or organisation (informal)

Investigation, adjudication, mediation, etc.

Indigenous/customary practice

Investigation, adjudication, mediation, etc.


Court (Shariah tribunals, Beth Din, etc.)

Investigation, adjudication, mediation, etc.




Independent third party (not connected to problem)

Mediation or conciliation



Investigation, adjudication, etc.

Organisation connected to problem

Other party is the organisation /

Other party is part of the organisation

Not Internet related

Investigation, adjudication, mediation, etc.

Internet related

Other party not the organisation

(e.g. eBay Resolution Centre)

Not Internet related

Industry standards body


Organised crime

Investigation, adjudication, mediation, etc.

2.7.2. Contact with process bodies, process initiation and participation

The ways in which parties trigger a process for resolution vary considerably, as does the level of potential engagement on the part of the parties. Perhaps because of this, the terminology used in past surveys to ask about respondents’ and others’ involvement in processes has also varied considerably. Indeed, the terminology used in these questions has differed far more than in relation to other core legal needs survey questions.

As the terminology used in questions seeking to identify processes can have a significant bearing on the nature of what is identified, great care must be taken in the selection of terms, and the appropriateness of terms will vary along with processes.

For example, “contact” with a process body does not equate to initiation of, the existence of, or participation in a process. Contact may relate to information or advice seeking, and even then does not denote success in achieving what is sought. Despite this, contact has been the most common term used to identify the involvement of courts and tribunals in dispute resolution. Towards the other end of the scale, asking about “participation” in a process will not necessarily identify all instances of a process being used.

In addition to these terms, other terms that have been used in relation to courts and tribunals have included (in descending order of prevalence) “appear at”, “go to”, “filed a case/lawsuit”, “turn to”, “appeal to”, “take to”, “make a claim or make use of” and “initiate”.

A narrower range of terms has been used to identify mediation and other processes in dispute resolution. For example, more than one-third of questions seeking to identify use of mediation have asked about “attendance” at mediation sessions, and another third about mediation being “arranged”.

If surveys are to ascertain which party, including third parties, initiated a process, this can most efficiently be asked about as an immediate follow-on question (in respect of each process identified). Similarly, ascertaining the extent of respondents’ involvement in processes is most efficiently addressed through follow-on questions.

2.7.3. Detail and specificity

As noted above, more detailed questions can lower the risk of misinterpretation and/or neglect of relevant memories. Also, as noted above, when questions centre upon lists, such as lists of processes, it is important to present each list item separately to respondents. Distinct and separate questions about processes have been found to substantially increase reporting rates, compared to the use of lists (unless items are individually presented) (Pleasence et al., 2016). Mediation, for example, was reported around four times as frequently when asked about separately, rather than as part of a list in an online survey.

Over half of the surveys in Table ‎1.1 and Table ‎1.2 used lists, which were usually read out to respondents, to ask about different processes. Most of the remaining surveys posed separate questions. A small number used open questions.49

2.7.4. Use of technical language and use of examples

Questions concerning resolution processes should also be constructed using lay language whenever possible. If specifically named processes are asked about or if technical language cannot be avoided, additional description should be provided, unless universal recognition can be assumed. If process categories are unclear, examples should be provided to clarify meaning using a range of examples broad enough to indicate a process’s scope.

Reflecting the broad range of lay interpretations of the terms “mediation” and “conciliation”, the Paths to Justice Scotland Survey provided 10 examples: Advisory, Conciliation and Arbitration Service (ACAS), Comprehensive Accredited Lawyer Mediators (CALM), Centre for Dispute Resolution (CEDR), Mediation Bureau, Academy of Experts, Chartered Institute of Arbitrators, National Family Mediation (NFM), Family Mediation Scotland (FMS), ACCORD and the SFLA. Evidently, fewer examples will generally suffice.

2.8. Whether and how justiciable problems have concluded

2.8.1. Whether or not problems have concluded

The majority of past surveys have asked whether identified problems have been concluded, with most using binary coding (concluded/ongoing) to record responses. The use of binary coding is inappropriate. There may be periods of time in which it is unclear whether problems have been concluded, or whether attempts to resolve them have been abandoned. Thus, a response option reflecting uncertainty is beneficial.

Moreover, there are aspects of problem conclusion that simplistic questioning may obfuscate. For example, disagreements or efforts to resolve problems may be concluded while problems persist, or the substance of problems may be concluded although disagreements persist. Moreover, respondents have sometimes reported that problems have been concluded, only to report them later in a survey as persisting but being “put up with” on a permanent basis.

To fully establish whether problems have been concluded, it is therefore necessary to inquire into whether problems are completely resolved (meaning they no longer exist, and there is no persisting active disagreement), otherwise settled (meaning that all parties have given up all actions to resolve them further), ongoing, or whether it is too early to tell.

The majority of past surveys with questions concerning conclusion have asked whether problems are “over” or “resolved”. An explicit definition of such terms is rare, despite their evident ambiguity.

The small number of surveys that have not enquired about whether problems were concluded have focussed on problem resolving behaviour and process. However, it is important to determine whether problems have been concluded, as problem resolving behaviour and process related data can only be complete in respect of concluded problems. Reporting ongoing problems as if they were concluded will lead to under-estimation of help seeking and process use.

2.8.2. Manner of problem conclusion

In all, more than thirty different category descriptions have been used in the surveys in Table ‎1.1 and Table ‎1.2 to ask about conclusion, although these can be reduced to eight principal categories, which are sometimes sub-divided and sometimes merged. These are:

  • decision by a third party (often split between courts/tribunals and other third parties);

  • mediation, conciliation and arbitration (often defined as being “formal” or “independent”);

  • action by a third party;

  • agreement between the parties (often split between agreements reached “directly”/ “personally” and agreements through lawyers or other representatives);

  • unilateral action by the other party;

  • unilateral action by the respondent (often split between action to resolve the problem and action to avoid the problem (e.g. move home));

  • the problem sorted itself out; and,

  • the problem is being put up with.

2.9. Perceptions of process and outcome

To understand access to justice and legal need, one needs to know more than just the processes utilised and manner of conclusion. One must understand the quality of resolution process and outcome. Legal needs surveys can help to capture participants’ experiences of different justice processes as well as assessment of outcomes.

The fact that a legal problem has been resolved by an institution does not mean that justice has been done. Around half of the surveys in Table ‎1.1 and Table ‎1.2 for which details are available50 included questions exploring respondents’ perceptions of “quality of process”,51 and a great majority included questions exploring perceptions of “quality of outcome”.52 The greater focus on outcomes suggests greater interest in perceptions of outcome among survey authors. However, “people's perceptions of procedural fairness are […] very important” (Van den Bos et al., 2001, p. 49) in shaping overall judgments of fairness.53

While the seminal surveys of the 1990s included questions concerning respondents’ perceptions of process and outcome (including eight process related questions in the Paths to Justice surveys), the most influential surveys in this topic area have been HiiL’s Justice Needs and Satisfaction Surveys. Historically, most past legal needs surveys have accommodated only a few questions in this area, but the ten Justice Needs and Satisfaction Surveys – informed by HiiL’s Measuring Access to Justice in a Globalising World project54 – have each devoted 19 questions to quality of process, and a further 23 to quality of outcome. Recognising the multidimensionality of process and outcome quality and drawing on a theoretical framework derived from extensive reviews of the literature (Klaming and Giesen, 2008; Verdonschot et al., 2008), the core questions investigating and seeking to measure perceptions of process address “procedural”, “interpersonal” and “informational” justice (14 questions). The core questions investigating and seeking to measure perceptions of outcome address “distributive” and “restorative” justice, along with outcome “functionality” and “transparency” (20 questions).

Procedural justice refers to various properties that a procedure should possess “in order to be perceived as fair by its user” (Klaming and Giesen, 2008, p. 3) including “voice, neutrality, trustworthiness, consistency, and accuracy” (Gramatikov et al., 2011, p. 361). Interpersonal justice “reflects the degree to which people are treated with politeness, dignity, and respect by authorities and third parties involved in executing procedures or determining outcomes” (Colquitt et al., 2001, p. 427). Informational justice is concerned with “explanations provided to people that convey information about why procedures were used in a certain way or why outcomes were distributed in a certain fashion” (Colquitt et al., 2001, p. 427). Distributive justice concerns the fair distribution of benefits and burdens (that in this context constitute a justiciable problem’s outcome),55 while restorative justice “is the dimension of the outcome which rectifies the damage or loss suffered … [as a result of the] problem” (Gramatikov et al., 2011, p. 363). Functionality of outcome “is the extent to which the outcome solves the problem” (Gramatikov et al., 2011, p. 363), and transparency of outcome concerns explanations for outcomes and the ability to compare the outcomes of similar problems.

Seen through this lens, the Paths to Justice surveys’ eight questions on perceptions of process concerned procedural and informational justice; while the United States Comprehensive Legal Needs Study focused on procedural and interpersonal justice. Among more recent surveys that have explored perceptions of process, the 2012 Colombian survey employed 21 questions56 to address multiple aspects of procedural, interpersonal and informational justice including “voice”, “neutrality” and “trustworthiness”.57 At the other end of the scale, the 2012 Georgian survey posed individual questions about procedural, interpersonal and informational justice respectively, while the 2016 Argentinian survey included single questions about interpersonal and informational justice. The 2011 Moldovan survey and World Justice Project’s 2016 and 2017 General Population Polls included single questions on overall process fairness.

Few surveys, other than the Justice Needs and Satisfaction Surveys, have addressed the multiple dimensions of outcome quality, and while the Justice Needs and Satisfaction Surveys have devoted around two-dozen questions to exploring the dimensions of outcome quality described above, the 2017 Sierra Leonean survey – notable in also addressing all four dimensions – asked just one question in respect to each. When surveys addressed respondents’ perceptions of outcomes, they usually asked only one or two questions using more or less the same format.

Among the 37 surveys known to have included supplementary questions on perceptions of outcomes, 36 asked about the extent to which outcomes were perceived to be “fair”, “satisfactory” or both (24, 21 and 11 surveys, respectively). A significant minority of surveys also sought to identify the extent to which outcomes were favourable to respondents (a task most appropriate to zero-sum disputes) and/or the extent to which they were seen to meet respondents’ objectives in acting (16 surveys, in both cases).

2.9.1. Defining the subject matter of process questions

With regard to the subject matter of inquiries into respondents’ perceptions of process and outcome, questions may deal with either (i) specified processes (such as specific court or mediation processes), of which a number may be involved in resolving a particular problem; or (ii) the problem resolution process as a whole.

In the first case, in order to address all identified processes, a greater number of questions must be asked. However, inquiring into specific processes allows for comparisons to be drawn between processes. Although inquiring into the problem resolution process as a whole also allows for comparisons, the relatively ill-defined subject matter of questioning (owing to the inability to isolate individual processes for those who use more than one) is problematic.

2.10. The cost of resolving justiciable problems

Cost is commonly considered “a central barrier to obtaining legal assistance” (Pleasence and Macourt, 2013, p. 1) and, hence, a significant barrier to accessing justice and a factor in unmet legal need. Questions about the costs of legal assistance are often of central importance to national policymakers. Almost all of the surveys detailed in Table ‎1.1 and Table ‎1.2 inquired directly or indirectly into the cost of resolving justiciable problems.58

Almost all the surveys obtained information about cost concerns; most often indirectly, in the context of problem resolution strategy decision-making.59 The great majority also asked direct questions about the financial costs incurred in seeking to resolve problems,60 with levels of expenditure,61 the nature of fee arrangements and subsidies commonly investigated. Very specific questions of interest to particular survey stakeholders have also sometimes been asked, such as whether costs were researched or negotiated.62 However, the number of questions devoted to inquiring into the cost of problem resolution varied considerably, from 31 in the Paths to Justice surveys to just one in the 2016 Mongolian survey. Across all the surveys, the median number of questions asked was seven.

Those surveys that included only a small number of dedicated cost-related questions usually sought to determine whether respondents had to pay for legal services and how much they paid (e.g. the 2005 Japanese survey),63 or how expensive services were considered to be (e.g. the 2015 Polish and 2016 Mongolian surveys). Some also asked about help or financial support obtained from legal aid or similar sources. This has been standard practice in surveys undertaken in jurisdictions with established legal aid schemes and in surveys intended to inform the institution or development of legal aid.

2.10.1. Types of cost

HiiL’s Measuring Access to Justice in a Globalising World project made evident that “the costs a claimant encounters on a path to justice can be very diverse” (Barendrecht et al., 2006, p. 13). Moreover, they can be measured “not only in terms of money, but also in terms of time64 and emotional costs (e.g. stress)” (Barendrecht et al., 2006, p. 5). Just under half of the surveys reviewed sought to ascertain the total financial cost of resolving problems;65 two-fifths sought to ascertain the cost in time;66 and a similar number sought to establish the emotional cost.67 However, only the Justice Needs and Satisfaction Surveys have sought to quantify all three of these cost types. They have also sought to quantify constituent costs of each type: first asking about respondents’ expenditure on various aspects of problem resolution, then about the time spent engaging in various activities, and finally about the emotional impact of problem resolution processes and their impact on “important relationships”.68 Twenty-three questions, including sub-questions, were required to do this: nine on financial costs, nine on temporal costs, and five on emotional costs.

In quantifying the financial cost of resolving problems, specific cost items mentioned in past surveys have included: lawyer and other advisor fees, court and other process fees, travel costs, communication related costs, evidence and information collection costs (including the cost of professional witnesses), bribes/”kick-backs”, reimbursement of witnesses’ incidental costs, domestic costs (e.g. babysitter, house cleaner), and loss of salary/business (to enable problem resolution).

Sometimes, as with the two most recent Canadian surveys, respondents were given a list of cost items and asked to provide only their aggregate financial cost. This is likely to yield less accurate estimates than asking for the cost of the items separately, as was done in the Justice Needs and Satisfaction Surveys, and the 2010 Ukrainian and 2011 Moldovan surveys. In any event, to arrive at an aggregate figure, respondents need to address each constituent item. Asking about them individually both prompts and provides time for appropriate recollection. In the case of the last two surveys mentioned, cost questions also extended to the broader economic impact of problems themselves. This form of questioning about financial costs can therefore be functionally similar to the more detailed forms of questioning about problem impact discussed above.
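For analysts processing responses to itemised cost questions of the kind described above, the aggregation step can be sketched as follows. This is a minimal illustration, not any survey's actual coding scheme; the item names are drawn loosely from the list above and the handling of "don't know" responses is an assumption.

```python
# Illustrative cost items (hypothetical names, loosely following the
# list of cost items mentioned in past surveys).
COST_ITEMS = [
    "lawyer_and_adviser_fees",
    "court_and_process_fees",
    "travel",
    "communication",
    "evidence_and_information",
    "domestic_costs",
    "lost_income",
]

def total_financial_cost(responses):
    """Sum itemised cost responses to an aggregate figure.

    A missing or 'don't know' response (None) contributes zero, but the
    result is flagged as a lower bound in that case, since the true
    total may be higher.
    """
    known = [responses.get(item) for item in COST_ITEMS]
    total = sum(v for v in known if v is not None)
    lower_bound_only = any(v is None for v in known)
    return total, lower_bound_only
```

Asking for each item separately and summing, as here, prompts recollection of each constituent cost; asking respondents for a single aggregate figure forces them to perform this summation mentally, which is likely to be less accurate.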

In quantifying the time cost of resolving problems, the Justice Needs and Satisfaction Survey asked the most detailed questions; seeking separate estimates for the time spent on activities such as searching for a legal advisor, communication with advisors (and others), document preparation, attending hearings and “hanging around” (e.g. in lines, for hearings, etc.).

In quantifying the emotional cost of resolving problems, the Justice Needs and Satisfaction Survey asked how stressful processes had been, to what extent they made respondents feel frustrated, to what extent they made respondents angry, and to what extent processes were humiliating.69

2.10.2. “Free” services and financial support

The great majority of past surveys have sought to identify whether respondents had to personally pay for any legal services obtained, and half have sought to establish whether fees have been met, or contributed to, from elsewhere.70

The question of personal payment is not as straightforward as it seems. There is ambiguity in the case of services funded from pooled resources, such as legal expenses insurance or membership subscriptions (e.g. union subscriptions). Here, there are both direct and indirect payments, and questions should specify which payments are relevant.

In the case of subsidies, if subsidies must be applied for, then their existence is within the purview of individual respondents. However, respondents will not always understand or recall applications for financial support or the identity of subsidising bodies. For example, respondents may confuse an application for financial support with other documents they or an intermediary prepared. When subsidies are provided on other bases, their existence is unlikely to be within the purview of individual respondents. The origins of the funding of ‘free’ services can be both opaque and multifarious. For instance, free services may be provided as part of a marketing strategy, on a voluntary basis, through charitable support or from state subsidy. Respondents cannot ordinarily be expected to have insight into this matter.

Nor can respondents be expected to have insight into the amount of any subsidy if the subsidy is hidden from them. Indeed, even when financial support is provided from a source known to a respondent, and in relation to an individual case – whether through legal aid, by an employer or another source – details of the amount of support may never be known or, if known, may be forgotten more readily than for personal expenditure. Thus, in the context of a legal needs survey, extensive investigation into the nature and amount of financial support provided to respondents – along the lines of the 31 cost-related questions asked in the Paths to Justice surveys – is unlikely to deliver accurate results. It is noteworthy that the successor surveys to the Paths to Justice survey in England and Wales – the English and Welsh Civil and Social Justice Survey – included just 16 cost-related questions in 2001, 11 in 2004, 10 in 2006 and five in 2010.

While it is entirely appropriate to ask respondents whether they applied for/received support from legal aid, a union, legal expenses insurance, etc., further questioning can only reliably focus on costs personally met by respondents.

2.11. Legal capability and legal empowerment

The ability of individuals to respond effectively to justiciable problems – and, linked to this, the support that may be required to meet legal needs – varies with legal capability.71 The concept of legal capability centres on the “range of capabilities” (Pleasence et al., 2014, p. 136) necessary to make and carry through informed decisions to resolve justiciable problems.72 There is no consensus on the precise constituents of legal capability, but there is much agreement among recent accounts of the concept. All reference, to some extent, the following constituents: the ability to recognise legal issues;73 awareness of law, services and processes; the ability to research law, services and processes; and the ability to deal with law related problems (involving, for example, confidence, communication skills and resilience).74

The great majority of surveys detailed in Table ‎1.1 and Table ‎1.275 included questions concerning one or more of these four constituents of legal capability. Most asked respondents about their awareness of, or familiarity (through prior use) with, legal services.76 The majority of questions concerned legal capability in general,77 rather than referencing reported problems.78 However, while a person’s general legal capability is increasingly understood to play a role in legal problem resolution behaviour, specific capability in handling individual legal problems is important to understand in the context of legal need (being relevant to, for example, whether people obtain appropriate support).

Although questions about awareness of (and, sometimes, also prior use of) services are routine, only a handful of surveys have asked about respondents’ professed knowledge of their legal position,79 and always in relation to reported problems.80 A few surveys (though not the same surveys81) have also posed questions about respondents’ awareness of dispute resolution processes. This relative lack of questioning about legal understanding suggests a greater concern with people’s ability to obtain information and support when required, than with their ability to independently progress legal cases.82 This focus of concern may also partly explain the slightly greater number of surveys that have asked respondents whether they regarded justiciable problems as having a “legal” dimension, a matter now recognised as having a substantial bearing on whether help is sought from “legal” services.83

As regards the ability of individuals to deal with law related problems, a significant number of surveys84 have investigated respondents’ confidence in resolving justiciable problems (although just two of these surveys have done this in relation to reported problems: the 2005 Japanese85 and 2012 Tajik survey). The majority of these surveys have adopted variants of the “subjective legal empowerment”86 (SLE) questions used in HiiL’s Justice Needs and Satisfaction Surveys. These have involved asking respondents how likely they think it would be that they would get a fair solution (or, separately, a solution and a solution that is fair87) to a justiciable problem. The questions focus on problems involving six types of opposing party: a debtor, an employer, a family member, a neighbour, a government authority and a retailer.

Questions such as these are simple to implement and give some insight into legal confidence. They can also provide a foundation for supplementary questions concerning knowledge of law and awareness of legal services. However, questions seeking to measure underlying traits (such as legal confidence) rather than observable phenomena, involve significant conceptual and technical challenges and require substantial testing. In the case of questions such as those used so far to investigate SLE, exploratory analysis88 of data from two recent surveys – both undertaken for the specific purpose of developing standardised measures of legal confidence and attitudes to law – identified issues with their psychometric properties (Pleasence and Balmer, forthcoming),89 indicating that additional developmental work is needed in order for them to function appropriately as an effective SLE scale. Alternative approaches to measuring legal confidence that were tested through these surveys – one based on independent questions, and one based on unfolding scenarios – yielded three working standardised legal confidence scales: a 6-item (scenario escalation based) “General Legal Confidence” (GLC) scale, a 6-item legal self-efficacy (LEF) scale, and a 4-item legal anxiety (LAX) scale (Pleasence and Balmer, forthcoming).

2.11.1. Generic aspects of legal capability

In addition to the unique aspects of legal capability, some generic aspects are also commonly asked about in legal needs surveys through demographic questions. For example, past demographic questions have addressed level of education, income, technological resources, social capital and disability. Thus, in drafting demographic questions for use in legal needs surveys, consideration should be given to their appropriateness as potential capability proxies.

2.12. Measuring legal need and unmet legal need

Despite their name, few legal needs surveys have sought to operationalise the concepts of legal need and unmet legal need for the purposes of measurement. This reflects the fact that measures of legal need are inevitably both crude and contentious, since the concept “cannot be measured directly” (Ignite Research, 2006, p. 10), is complex, contested and to a large extent political. Rather than seek to define and measure legal need, recent surveys have therefore tended simply to investigate aspects of need, such as the relative seriousness of problems,90 legal capability, resolution strategy choices, and obstacles and regrets. This can provide a broad picture of the nature of the justiciable problems that people face, and people’s capability, behaviour, success or failure in resolving them. It also provides a basis for survey stakeholders to apply their own concepts of legal need and unmet legal need, within the constraints of the data collected. This may often be a sensible approach. Although there are empirical aspects of legal need, evidently “there are normative aspects here as well” (Sandefur, 2016, p. 451).

Nevertheless, attempts continue to be made to develop and refine proxy measures of legal need and unmet legal need, with evident increasing complexity and sophistication (it being argued that “a more comprehensive approach … provides a better basis” for measurement91). Although competing definitions of legal need continue to be offered, the assumptions on which they are based are now far better appreciated and understood. While it may once have been assumed that the experience of “legal” problems without recourse to lawyers is equivalent to a “factual need” for legal services,92 it is now well-recognised there can be many appropriate responses to problems with a legal dimension, some of which may involve neither input from legal services nor any reference to law. Commentators have pointed out the relevance of context, highlighting the relevance of advantages and disadvantages (including cost) of different potential responses to “legal” problems in determining legal need.93 More recently, emphasis has also been placed on capability, options and choice.94

Thus, as described in ‎Chapter 1. , it is now broadly agreed that legal need arises whenever a deficit of legal capability necessitates legal support to enable a justiciable issue to be appropriately dealt with. A legal need is therefore unmet if a justiciable issue is inappropriately dealt with as a consequence of the unavailability of (suitable) legal support to make good a deficit of legal capability. But the question remains as to what constitutes appropriateness, what forms of support might be necessary, who should act as arbiter and what comprises legal capability.

Explicit operationalisations of the concepts of legal need and unmet legal need have been undertaken in the context of the 2006 New Zealand,95 2012 Colombia96 and 2016 Argentinian97 surveys. In 2017, Colombia’s Department of National Planning also developed an index of effective access to justice that relied heavily on legal needs measures.98

Recognising the limits of the proxy measures used, the New Zealand approach involved a three-way segmentation of need as “definitely having been met”, “definitely not having been met” and possibly either met or unmet. A further distinction was made, among cases in which need was deemed to have been met, between cases that involved difficulties in securing help and cases that did not. In simple terms, legal need was deemed to have been met if there was agreement between the parties, a problem concluded through mediation, or a problem concluded with the help of someone other than a mediator or family and friends, and the help was described as useful. Unmet legal need was taken to include cases where no action was taken because it was not known what to do, problem resolution was abandoned, and no help was sought because of specified barriers (including language, cost and fear). In addition, trivial problems and problems that resolved themselves were excluded from all calculations.

Also recognising the limits of the proxy measures used, the Colombian approach involved setting out various definitions of unmet legal need. In its widest interpretation, unmet legal need was taken to encompass all cases other than those in which parties were reported to have complied with a judgement. In its narrowest, it was taken to encompass only cases that either involved a judgement or settlement that was not complied with, involved no action being taken or action being abandoned and dissatisfaction with that decision, or were still ongoing after a defined period of time. The authors commented, “even complex cases should have some kind of substantive decision after two years” (La Rota et al., 2012, pp. 99-100).

Figure ‎2.1. Logic tree for proxy measurement of legal need and unmet legal need

Like the New Zealand approach, the Argentinian operationalisation encompassed all four elements of the definitions of legal need and unmet legal need set out above: appropriateness, necessity, legal support and legal capability. The questions it drew upon asked about respondents’ legal capabilities and satisfaction with assistance and outcomes, to enable unmet need to be measured as the proportion of respondents “who did not consider themselves capable of solving justiciable problems through their own knowledge and ability ... [and] were not satisfied with assistance received or with the outcome in cases in which they obtained no assistance” (Subsecretaría de Acceso a la Justicia (Ministerio de Justicia y Derechos Humanos), 2016, p. 23).

Figure ‎2.1 sets out a framework, in the form of a logic tree, for measuring legal need that draws upon the New Zealand, Colombian and Argentinian measures of legal need and unmet legal need. However, it references process fairness rather than outcomes – as process fairness can be addressed through policy, and fair outcomes are broadly reliant on fair processes – and introduces legal awareness/understanding.

Building on the New Zealand triviality filter, the framework not only excludes trivial problems from need calculations, but also assumes all of the most serious problems involve unmet need if expert help is not obtained. As has been argued in the case of advice in the police station (Pleasence et al., 2011c), some problems are so serious that a person will need legal support irrespective of their professed legal capability (at least in the “normative” or “comparative” senses set out in Bradshaw’s taxonomy of social need) (Bradshaw, 1972). For example, anybody arrested on suspicion of a serious offence, such as rape or murder, needs legal support.

As an initial step, the framework includes a definition of legal need, which is either “met” or “unmet”. No legal need arises in the case of trivial problems or in the case of moderately serious problems if respondents have legal knowledge, legal confidence and consider the resolution process fair.
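For those implementing such a framework analytically, the decision logic described above can be sketched as a simple classification function. This is an illustrative sketch only: the field names, seriousness bands and the exact conditions for "met" need are hypothetical assumptions, not a reproduction of Figure 2.1.

```python
def classify_legal_need(problem):
    """Return 'no need', 'met need' or 'unmet need' for one reported
    problem, following the broad shape of the logic-tree framework.
    All keys are hypothetical stand-ins for survey-derived variables."""
    # Trivial problems are excluded: no legal need arises.
    if problem["seriousness"] == "trivial":
        return "no need"
    # The most serious problems are assumed to involve unmet need
    # whenever expert help is not obtained, irrespective of professed
    # legal capability.
    if problem["seriousness"] == "severe":
        return "met need" if problem["expert_help_obtained"] else "unmet need"
    # Moderately serious problems: no legal need arises if the
    # respondent had legal knowledge, legal confidence, and considered
    # the resolution process fair (i.e. no capability deficit).
    if (problem["had_legal_knowledge"]
            and problem["had_legal_confidence"]
            and problem["process_seen_as_fair"]):
        return "no need"
    # Otherwise a legal need arises; it is met only if suitable support
    # made good the capability deficit and the process was seen as fair.
    if problem["suitable_support_obtained"] and problem["process_seen_as_fair"]:
        return "met need"
    return "unmet need"
```

The value of expressing the framework this way is that every survey respondent's problem falls into exactly one branch, so the proportions of met and unmet need can be tabulated directly from the classified records.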

Questions that can be used to populate the framework are detailed in ‎Chapter 3. ‎Chapter 4. also explores, more generally, how access to justice indicators can be constructed from legal needs surveys.

2.13. Consistency of approach and the comparability of legal needs survey findings

A broad range of factors affects the comparability of data from different legal needs surveys. The impact of methodological differences has been discussed extensively elsewhere (Pleasence et al., 2013a, 2016).99 However there has been relatively little discussion of how data might be specified to promote comparability.

Data comparability requires that data at different levels of detail are investigated within a consistent conceptual and taxonomical framework, and that more detailed data can be made equivalent to less detailed data. This is possible only if more detailed data encompasses all, but no more than, the elements of less detailed data. Figure ‎2.2 illustrates the necessary relationship between data at different levels of detail in order for it to be comparable. As can be seen, all 16 items (and only these 16) feature in all five category sets. Thus, any category set can be aggregated to any lower detail category set. The category sets need not be constructed symmetrically, as in Figure ‎2.2. But each new level of detail must involve sub-dividing lesser detail categories in order to maintain compatibility. If there is re-allocation of items between sub-categories, the sets may become incompatible.

In the case of justiciable problem types, if an investigation of more narrowly defined problems involves asking about all constituent problem types of a broader (i.e. less detailed) category (as defined in a Table ‎2.1 type taxonomy), and problems are asked about in a way that allows them to be aggregated to the broader category without the inclusion of any additional problems, then there is full comparability between the more and less detailed problem category data.
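The compatibility condition described above can be made concrete with a small sketch. The category names below are invented for illustration and do not reproduce the Table 2.1 taxonomy; the point is the structural rule that every detailed category must nest wholly within a single broader category, with both sets covering exactly the same items.

```python
# Broad (less detailed) categories, each mapping to its constituent items.
BROAD = {
    "family": {"divorce", "custody", "domestic_violence"},
    "money": {"debt", "insurance", "welfare_benefits"},
}

# A more detailed category set that sub-divides the broad categories.
DETAILED = {
    "divorce_and_separation": {"divorce"},
    "children": {"custody"},
    "safety": {"domestic_violence"},
    "consumer_credit": {"debt", "insurance"},
    "benefits": {"welfare_benefits"},
}

def is_compatible(detailed, broad):
    """Compatible iff both sets cover exactly the same items and every
    detailed category falls wholly within a single broad category."""
    all_detailed = set().union(*detailed.values())
    all_broad = set().union(*broad.values())
    if all_detailed != all_broad:
        return False
    return all(any(items <= b_items for b_items in broad.values())
               for items in detailed.values())

def aggregate(counts_by_detailed, detailed, broad):
    """Roll counts of detailed categories up to the broad categories."""
    totals = {b: 0 for b in broad}
    for d_cat, n in counts_by_detailed.items():
        for b_cat, b_items in broad.items():
            if detailed[d_cat] <= b_items:
                totals[b_cat] += n
                break
    return totals
```

If an item were re-allocated between sub-categories (say, `insurance` moved under a detailed category that otherwise nests within `family`), `is_compatible` would return `False`, mirroring the incompatibility the text warns against.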

Asking survey questions at different levels of detail will impact on responses. As has been noted above, the provision of additional detail in relation to, say, justiciable problems, increases accuracy of reporting and reported incidence (Pleasence et al., 2016). The more detail provided, the less respondents are left to interpret the scope of questions, and the lower the risk of misinterpretation and/or that relevant memories will be neglected.

Figure 2.2. Compatible category data at different levels of detail

2.14. A framework for asking questions: survey structure

Legal needs surveys investigate people’s experience of justiciable problems and, in particular, the strategies adopted, the types of help sought and the processes used in their resolution. This subject matter involves the collection of multiple levels of data. Justiciable problems are unevenly distributed among organisations, households and individuals; some experience few or none, while others experience many. In turn, strategies, help seeking and processes are unevenly distributed across problems. For example, some problems may involve one source of help, while others involve none or multiple sources. So, data can relate to, say, households and, within them, people and, within them, problems and, within them, sources of help, and so on.

This has two important implications. The first is that, if unique data is required for multiple individual problems, strategies, sources of help or processes, then surveys must include “loops” and “sub-loops” of questions to address each of these systematically. The second is that it is impracticable to ask follow-up questions about each and every problem or, in the case of problems that are followed up, detailed questions about each and every strategy, source of help or process. Numerical limits must be applied to avoid excessive length. The potential for follow-up is a function of the amount of detail sought and the duration of interviews.

2.14.1. A modular survey approach

Figure 2.3 illustrates the typical hierarchical structure of legal needs survey data. Sources of help and processes are nested within problems, which are nested within people, who are nested within households. Legal needs survey data is often compiled into separate person level and problem level datasets; source of help and process level datasets can also be compiled.
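The nesting described above can be illustrated with a minimal sketch (in Python; the record layout and all field names are assumptions for illustration, not a prescribed schema) that flattens household-level records into a problem-level dataset:

```python
# Illustrative sketch of the hierarchical data structure: sources of help
# nested within problems, within people, within households. Field names
# are assumptions, not a prescribed schema.

households = [
    {"hh_id": 1, "people": [
        {"person_id": 1, "problems": [
            {"type": "debt", "help": ["lawyer", "advice agency"]},
            {"type": "housing", "help": []},
        ]},
        {"person_id": 2, "problems": []},
    ]},
]

# Flatten into a problem-level dataset, carrying identifiers down the
# levels so each problem links back to its person and household.
problem_rows = [
    {"hh_id": hh["hh_id"], "person_id": p["person_id"],
     "type": prob["type"], "n_sources": len(prob["help"])}
    for hh in households
    for p in hh["people"]
    for prob in p["problems"]
]

print(problem_rows)  # two rows: one per problem
```

A source of help level dataset could be built the same way, with one further loop over each problem’s sources.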

Figure 2.3. Example legal needs survey data structure

Hierarchical data of the type typically collected through legal needs surveys is most naturally reflected in a modular questionnaire design. Within a modular design, the various data strata become distinct subjects of enquiry, and questions concerning them constitute distinct “modules”. Modules can be repeated within interviews, as necessary, in order to address, for example, multiple instances of problems. Within these modules, questions dealing with the same sub-topic can also be viewed as modules. These topic-based modules do not relate to distinct data levels, but this approach helps to give clarity to survey data, and facilitates survey design and analysis. Designing legal needs survey questionnaires as a combination of specific structural and topic-based modules – linking to data structure and the various topics of study – helps tie questionnaires to their defining research questions, clarify which topics are central and which peripheral, and make apparent the scale of sub-sampling required in order to keep interviews to a defined duration. If surveys are repeated, or questionnaires shared between surveys, a modular design makes the process of refinement relatively easy to manage, as modules can be worked on independently and substituted. Figure ‎2.4 illustrates a model legal needs survey questionnaire structure.

Figure 2.4. Model legal needs survey questionnaire structure

2.14.2. Sub-samples

Because some respondents report multiple problems – and, within problems, multiple sources of help or processes – and there is limited time available to ask follow-up questions, it is necessary to employ sub-sampling. This poses various methodological challenges. One example concerns how problems are reported. Respondents who report only one problem may provide data about “all” their problems, while those reporting many problems may provide data about only one or some. The resulting sample is far from representative of problems as a whole; and weighting down problems reported by those who report only one problem greatly reduces the effective sample size. Related to this is the challenge of determining an appropriate method of sub-sampling. No method is perfect (in practice, at least), but some methods are more problematic than others.

For the Paths to Justice surveys, single problems within problem categories were selected for follow-up,100 and when more than one problem was reported in a category, the second most recent was selected. The reason for selecting the second most recent problem (in preference to the most recent) was the increased likelihood that sufficient time would have elapsed for resolution to have been achieved. Given that a significant proportion of justiciable problems reported by respondents are not concluded by the time of the interview, this is a reasonable approach. It delivers data for a reasonably diverse set of problems, although older problems may not always play out in the same way as newer ones. Selection from only concluded problems is problematic, as many problems are ongoing at the time of interview. For example, the 2012 Macedonian survey – which adopted the Paths to Justice surveys’ approach of selecting second most recent problems for follow-up – found that only a “disappointing” (Srbijanko et al., 2013, p. 60) 38.6% of reported problems had been concluded. Thus, a sample of concluded problems is likely to be much smaller than a sample of all problems. Also, while most ongoing problems will be new, some will be atypical or intractable. A sample of concluded problems therefore fails to shed light on these problems and may be biased towards easier-to-resolve or less severe issues.
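As a sketch of this selection rule (in Python, with purely illustrative data and field names), the second most recent problem is chosen within each category where two or more problems were reported, and the sole problem otherwise:

```python
# Sketch of the Paths to Justice follow-up rule: within each problem
# category, select the second most recent problem where two or more were
# reported, otherwise the single problem. Data are illustrative only.

from collections import defaultdict

problems = [
    {"category": "consumer", "months_ago": 2},
    {"category": "consumer", "months_ago": 7},
    {"category": "consumer", "months_ago": 11},
    {"category": "employment", "months_ago": 4},
]

by_category = defaultdict(list)
for p in problems:
    by_category[p["category"]].append(p)

selected = []
for category, group in by_category.items():
    group.sort(key=lambda p: p["months_ago"])  # most recent first
    # Second most recent if available (more likely to be concluded).
    selected.append(group[1] if len(group) > 1 else group[0])

print(selected)
# consumer: the problem from 7 months ago; employment: the only problem.
```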

Another problematic yet common form of problem sub-sampling involves only respondents’ most serious problems being followed-up. This has the superficial attraction of yielding a set of more serious problems for analysis. However, the resulting sample is even more difficult to characterise than samples obtained using the methods described above. The most serious problems of those respondents who report only one problem (a significant proportion of respondents) may be relatively trivial, and the most serious problems (overall) may cluster within individuals. Thus, samples of problems obtained via this approach are not samples of the most serious problems, but of the problems seen as being their most serious by each individual respondent. A better approach is to assess the seriousness of all problems at the time they are reported, and then randomly select from those that meet an appropriate seriousness threshold.
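The suggested alternative can be sketched as follows (in Python; the seriousness scores, threshold and follow-up limit are all illustrative assumptions): every reported problem is scored for seriousness, and follow-up problems are then drawn at random from those meeting the threshold.

```python
# Sketch of the suggested alternative: score the seriousness of every
# reported problem, then randomly select for follow-up from those meeting
# a threshold. Scores, threshold and limit are illustrative assumptions.

import random

problems = [
    {"id": 1, "seriousness": 2},   # trivial: excluded by the threshold
    {"id": 2, "seriousness": 6},
    {"id": 3, "seriousness": 8},
    {"id": 4, "seriousness": 9},
]

THRESHOLD = 5          # e.g. a point on an assumed 0-10 severity scale
MAX_FOLLOW_UPS = 2     # interview-length constraint

eligible = [p for p in problems if p["seriousness"] >= THRESHOLD]
rng = random.Random(42)  # fixed seed for a reproducible illustration
followed_up = rng.sample(eligible, min(MAX_FOLLOW_UPS, len(eligible)))

print(sorted(p["id"] for p in followed_up))
```

Unlike selecting each respondent’s “most serious” problem, this yields a random sample from a well-defined population: all reported problems above the threshold.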

Compounding the difficulty of achieving a representative sample of problems for follow-up, the rarity of some problems means that sub-sampling (aggressive sub-sampling in particular) can leave too few such problems in a sample for viable analysis. Again, as many legal needs survey respondents report only one problem, this is a difficult issue to address. One approach that has been taken is to weight the probability of problem selection in favour of rarer problems, and thus select more of them. The disadvantage of this approach is that it further reduces sample efficiency.
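A minimal sketch of such weighted selection (in Python; the problem types and incidence figures are illustrative assumptions) sets each problem’s selection weight to the inverse of its type’s incidence, so that each type has an equal chance of selection in aggregate:

```python
# Sketch of weighting problem selection in favour of rarer problem types:
# each problem's weight is the inverse of its type's incidence, so rare
# types are over-sampled (and must be weighted back down in analysis).
# Incidence counts and problem types are illustrative assumptions.

import random
from collections import Counter

problems = ["consumer"] * 8 + ["clinical negligence"] * 2

incidence = Counter(problems)                   # consumer: 8, clin. neg.: 2
weights = [1 / incidence[p] for p in problems]  # rarer => larger weight

rng = random.Random(0)  # fixed seed for a reproducible illustration
sample = rng.choices(problems, weights=weights, k=1000)

share = Counter(sample)["clinical negligence"] / 1000
print(round(share, 2))  # close to 0.5, versus 0.2 of reported problems
```

The over-sampled rare problems must then carry analysis weights (here, the reciprocal of the selection weight) to recover unbiased estimates, which is precisely what reduces sample efficiency.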

Turning to sources of help, similar challenges arise to those just discussed in relation to problems. Surveys have asked about, for example, the first, last, most useful and most impactful source of help. All are problematic. The first source of help is more likely to be inappropriate or a “stepping stone”, and samples of first sources will yield a picture reflecting this. Last sources are more likely to be legal, in part because a person may only consult a lawyer as a final step. And samples of the most useful or most impactful sources will paint an overly positive picture of sources overall. Sub-sampling of sources of help should therefore be avoided or, if necessary, carefully designed with the questions that will be asked of the data in mind.

Finally, no previous surveys seem to have sub-sampled processes. This reflects the limited number of processes ordinarily associated with individual problems and the limited number of processes that are followed-up. In earlier surveys, processes were a key focus of investigation, but now only a limited number of questions are generally asked about individual processes.


Bandura, A. (1977), Self-Efficacy: Toward a Unifying Theory of Behavioral Change, Prentice-Hall, Englewood Cliffs, NJ.

Barendrecht, M., M. Gramatikov, I. Giesen, M. Laxminarayan, P. Kamminga, L. Klaming, J.H. Verdonschot and C. van Zeeland (2010), Measuring Access to Justice in a Globalising World, HiiL, The Hague.

Barendrecht, M., Y.P. Kamminga and J.H. Verdonschot (2008), “Priorities for the justice system: Responding to the most urgent legal problems of individuals”, TILEC Discussion Paper No. 2008/011, TISCO Working Paper No. 001/2008, Tilburg University Faculty of Law, Tilburg.

Barendrecht, M., J. Mulder and I. Giesen (2006), How to Measure the Price and Quality of Access to Justice?, Tilburg University, Tilburg.

Bond, T. and C.M. Fox (2015), Applying the Rasch Model: Fundamental Measurement in the Human Sciences, 3rd edition, Routledge, New York.

Bradshaw, J. (1972), “Taxonomy of social need”, in G. McLachlan (ed.), Problems and Progress in Medical Care: Essays on Current Research, 7th series, Oxford University Press, London.

Canadian National Action Committee on Access to Justice in Civil and Family Matters (2013), Responding Early, Responding Well: Access to Justice Through the Early Resolution Services Sector, National Action Committee on Access to Justice in Civil and Family Matters, Ottawa.

Chen, K.‐P., K.‐C. Huang, Y.-L. Huang, H.-P. Lai and C.-C. Lin (2012), “Exploring advice seeking behavior: Findings from the 2011 Taiwan Survey of Justiciable Problems”, Paper presented at the Law and Society Association Conference, Honolulu, 7 June 2012.

Collard, S., C. Deeming, L. Wintersteiger, M. Jones and J. Seargeant (2011), Public Legal Education Evaluation Framework, University of Bristol Personal Finance Research Centre, Bristol.

Colquitt, J.A., D.E. Conlon, M.J. Wesson, C.O.L.H. Porter and K.Y. Ng (2001), “Justice at the millennium: A Meta-analytic review of 25 years of organizational justice research”, Journal of Applied Psychology, Vol. 86, pp. 425-445.

Coumarelos, C., D. Macourt, J. People, H.M. McDonald, Z. Wei, R. Iriana and S. Ramsey (2012), Legal Australia-Wide Survey: Legal Need in Australia, Law and Justice Foundation of New South Wales, Sydney.

Davey, H.M., A.L. Barratt, P.N. Butow and J.J. Deeks (2007), “A one-item question with a Likert or visual analog scale adequately measured current anxiety”, Journal of Clinical Epidemiology, Vol. 60(4), pp. 356-360.

Dignan, T. (2006), Northern Ireland Legal Needs Survey, Northern Ireland Legal Services Commission, Belfast.

Farrow, T.C.W., A. Currie, N. Aylwin, L. Jacobs, D. Northrup and L. Moore (2016), Everyday Legal Problems and the Cost of Justice in Canada, Canadian Forum on Civil Justice, Toronto.

Ferreira-Valente, M.A., J.L. Pais-Ribeiro and M.P. Jensen (2011), “Validity of four pain intensity rating scales”, Pain, Vol. 152(10), pp. 2399-2404.

Franklyn, R., T. Budd, R. Verrill and M. Willoughby (2017), Findings from the Legal Problem Resolution Survey, Ministry of Justice, London.

Galesic, M. and M. Bosnjak (2009), “Effects of questionnaire length on participation and indicators of response quality in a web survey”, Public Opinion Quarterly, Vol. 73(2), pp. 349-360.

Genn, H. (1999), Paths to Justice: What People Do and Think About Going to Law, Hart Publishing, Oxford.

Genn, H. (1997), “Understanding civil justice”, Current Legal Problems, Vol. 50(1), pp. 155-187.

Goldstein, H. (2011), Multilevel Statistical Models, 4th edition, Wiley, Chichester.

Gramatikov, M.A. and R.B. Porter (2011), “Yes I can: Subjective legal empowerment”, Georgetown Journal on Poverty Law and Policy, Vol. 18(2), pp. 169-199.

Gramatikov, M.A., M. Barendrecht and J.H. Verdonschot (2011), “Measuring the costs and quality of paths to justice”, Hague Journal of the Rule of Law, Vol. 3, pp. 349-379.

Gindling, T.H. and D. Newhouse (2013), Self-Employment in the Developing World: Background Paper for the World Development Report 2013, World Bank, Washington, DC.

Griffiths, J. (1980), “A comment on research into legal needs”, in E. Blankenburg (ed.), Innovations in the Legal Services, Oelgeschlager, Gunn and Hain, Cambridge, Mass.

Groves, R.M., F.J. Fowler, M.P. Couper, J.M. Lepkowski, E. Singer and R. Tourangeau (2009), Survey Methodology, 2nd edition, John Wiley and Sons, Hoboken, New Jersey.

Hawker, G.A., S. Mian, T. Kendzerska and M. French (2011), “Measures of adult pain: Visual Analog Scale for Pain (VAS Pain), Numeric Rating Scale for Pain (NRS Pain), McGill Pain Questionnaire (MPQ), Short‐Form McGill Pain Questionnaire (SF‐MPQ), Chronic Pain Grade Scale (CPGS), Short-Form 36 Bodily Pain Scale (SF-36 BPS), and Measure of Intermittent and Constant Osteoarthritis Pain (ICOAP)”, Arthritis Care and Research, Vol. 63(S11), pp. S240-S252.

Hayes, M.H.S. and D.G. Patterson (1921), “Experimental development of the graphical rating method”, Psychological Bulletin, Vol. 18, pp. 98-99.

Himelein, K., N. Menzies and M. Woolcock (2010), “Surveying justice: A practical guide to household surveys”, Justice and Development Working Paper Series, Vol. 11/2010, World Bank Justice Reform Practice Group, Washington, DC.

Hjermstad, M.J., P.M. Fayers, D.F. Haugen, A. Caraceni, G.W. Hanks, J.H. Loge, R. Fainsinger, N. Aass, S. Kaasa and EPCRC (2011), “Studies comparing numerical rating scales, verbal rating scales, and visual analogue scales for assessment of pain intensity in adults: A systematic literature review”, Journal of Pain and Symptom Management, Vol. 42, pp. 1073-1093.

Ignite Research (2006), Report on the 2006 National Survey of Unmet Legal Needs and Access to Services, Legal Services Agency, Wellington.

Institute of Social Studies and Analysis (2012), KAP Survey Concerning Justiciable Events in Georgia, Open Society – Georgia Foundation, Tbilisi.

Kemp, V., P. Pleasence and N.J. Balmer (2007), The Problems of Everyday Life: Crime and the Civil and Social Justice Survey, Centre for Crime and Justice Studies, London.

Klaming, L. and I. Giesen (2008), “Access to justice: The quality of the procedure”, TISCO Working Paper Series on Civil Law and Conflict Resolution Systems, Vol. 002/2008, University of Utrecht, Utrecht.

Kobzin, D., A. Chernousov, R. Sheiko, M. Budnik, M. Kolokolova and S. Scherban (2011), The Level of Legal Capacity of the Ukrainian Population: Accessibility and Effectiveness of Legal Services, International Renaissance Foundation and Kharkov Institute of Social Research, Kharkov.

La Rota, M.E., S. Lalinde and R. Uprimny (2013), Encuesta Nacional de Necesidades Jurídicas Análisis General y Comparativo Para Tres Poblaciones, Dejusticia - Centro de Estudios de Derecho, Justicia y Sociedad, Bogota.

Lavrakas, P.J. (2008), Encyclopedia of Survey Research Methods, Sage, Thousand Oaks, California.

Legal Services Agency (2006), Technical Paper: Defining Legal Need and Unmet Legal Need, Legal Services Agency, Wellington.

Legal Services Board (2017), Prices of Individual Consumer Legal Services 2017: An Analysis of a Survey of Prices Quoted for Commonly Used Legal Services, Legal Services Board, London.

Lewis, P. (1973), “Unmet legal needs”, in P. Morris, R. White and P. Lewis (eds.), Social Needs and Legal Action, Martin Robertson, Oxford.

Lind, E.A. and T.R. Tyler (1988), The Social Psychology of Procedural Justice, Plenum, New York.

Marks, F.R. (1976), “Some research perspectives for looking at legal need and legal services delivery systems”, Law and Society Review, Vol. 11, pp. 191-205.

Maslow, A.H. (1943), “A theory of human motivation”, Psychological Review, Vol. 50, pp. 370-396.

McCann, M. (2006), “On legal rights consciousness: A challenging analytical tradition”, in B. Fleury-Steiner and L.B. Nielsen (eds.), The New Civil Rights Research, Ashgate, Aldershot.

Murayama, M. (2007), “Experiences of problems and disputing behaviour in Japan”, Meiji Law Journal, Vol. 14, pp. 1-59.

OXERA (2011), A Framework to Monitor the Legal Services Sector, Oxera (for the Legal Services Board), Oxford.

Parle, L.J. (2009), Measuring Young People’s Legal Capability, Independent Academic Research Studies and PLEnet, London.

Pleasence, P. (2006), Causes of Action: Civil Law and Social Justice, 2nd edition, The Stationery Office, Norwich.

Pleasence, P. and N.J. Balmer (forthcoming), “Development of a general legal confidence scale: A first implementation of the Rasch measurement model in empirical legal studies,” Journal of Empirical Legal Studies.

Pleasence, P. and N.J. Balmer (2017), “It’s personal: Business ownership and the experience of legal problems”, Justice Issues, Vol. 24, Law and Justice Foundation of New South Wales, Sydney.

Pleasence, P. and N.J. Balmer (2014), How People Resolve ‘Legal’ Problems, PPSR (for the Legal Services Board), Cambridge.

Pleasence, P. and N.J. Balmer (2013), In Need of Advice: Findings of a Small Business Legal Needs Benchmarking Survey, PPSR (for the Legal Services Board), Cambridge.

Pleasence, P. and D. Macourt (2013), What Price Justice? Income and the Use of Lawyers, Law and Justice Foundation of New South Wales, Sydney.

Pleasence, P. and H. McDonald (2013), Crime in Context: Criminal Victimisation, Offending, Multiple Disadvantage and the Experience of Civil Law problems, Law and Justice Foundation of New South Wales, Sydney.

Pleasence, P., N.J. Balmer and C. Denvir (2015), How People Understand and Interact with the Law, Legal Education Foundation, London.

Pleasence, P., N.J. Balmer and C. Denvir (2014), Reshaping Legal Services: Building on the Evidence Base, Law and Justice Foundation of New South Wales, Sydney.

Pleasence, P., N.J. Balmer and S. Reimers (2011b), “What really drives advice seeking behaviour? Looking beyond the subject of legal disputes”, Oñati Socio-Legal Series, Vol. 1(6).

Pleasence, P., N.J. Balmer and R.L. Sandefur (2016), “Apples and oranges: An international comparison of the public’s experience of justiciable problems and the methodological issues affecting comparative study”, Journal of Empirical Legal Studies, Vol. 13(1), pp. 50-93.

Pleasence, P., N.J. Balmer and R.L. Sandefur (2013a), Paths to Justice: A Past, Present and Future Roadmap, Nuffield Foundation, London.

Pleasence, P., N.J. Balmer and T. Tam (2009), “Failure to recall,” in R.L. Sandefur (ed.), Access to Justice, Emerald, Bingley.

Pleasence, P., Z. Wei and C. Coumarelos (2013b), “Law and disorders: Illness/disability and the response to everyday problems involving the law”, Updating Justice, Vol. 30, Law and Justice Foundation of New South Wales, Sydney.

Pleasence, P., N.J. Balmer, A. Patel, A. Cleary, T. Huskinson and T. Cotton (2011a), Civil Justice in England and Wales: Report of Wave 1 of the English and Welsh Civil and Social Justice Survey, Legal Services Commission, London.

Pleasence, P., A. Buck, T. Goriely, J. Taylor, H. Perkins and H. Quirk (2001), Local Legal Need, Legal Services Commission, London.

Pleasence, P., V. Kemp and N.J. Balmer (2011c), “The justice lottery? Police station advice 25 years on from PACE,” Criminal Law Review, Vol. 11, pp. 3-18.

Rasbash, J., F. Steele, W.J. Browne and H. Goldstein (2012), A User’s Guide to MLwiN, v2.26, Centre for Multilevel Modelling, University of Bristol, Bristol.

Rubin, D.C. and A.D. Baddeley (1989), “Telescoping is not time compression: A model,” Memory and Cognition, Vol. 17(6), pp. 653-661.

Sandefur, R.L. (2016), “What we know and need to know about the legal needs of the public”, South Carolina Law Review, Vol. 67, pp. 443-459.

Schaeffer, N.C. and S. Presser (2003), “The science of asking questions”, Annual Review of Sociology, Vol. 29, pp. 65-88.

Srbijanko, J.K., N. Korunovska and T. Maleska (2013), Legal Need Survey in the Republic of Macedonia: The Legal Problems People Face and the Long Path to Justice, Open Society Foundation – Macedonia, Skopje.

Streiner, D.L., G.R. Norman and J. Cairney (2015), Health Measurement Scales, 5th edition, Oxford University Press, Oxford.

Subsecretaría de Acceso a la Justicia Ministerio de Justicia y Derechos Humanos (2017), Diagnóstico de Necesidades Jurídicas Insatisfechas y Niveles de Acceso a la Justicia, Subsecretaría de Acceso a la Justicia Ministerio de Justicia y Derechos Humanos, Buenos Aires.

Tourangeau, R., L.J. Rips and K. Rasinski (2000), The Psychology of Survey Response, Cambridge University Press, Cambridge.

United Nations Department of Economic and Social Affairs (2005), Household Sample Surveys in Developing and Transition Countries, United Nations, New York.

United Nations Office on Drugs and Crime and United Nations Economic Commission for Europe (2010), Manual on Victimization Surveys, United Nations, Geneva.

van den Bos, K., E.A. Lind and H.A.M. Wilke (2001), “The psychology of procedural and distributive justice viewed from the perspective of fairness heuristic theory”, in R. Cropanzano (ed.), Justice in the Workplace: From Theory to Practice, Vol. 2, Lawrence Erlbaum, Mahwah, NJ.

Verdonschot, J.H., M. Barendrecht, L. Klaming and P. Kamminga (2008), “Measuring access to justice: The quality of outcomes”, TISCO Working Paper Series on Civil Law and Conflict Resolution Systems, Vol. 007/2008, Tilburg University, Tilburg.

Wagenaar, W.A. (1986), “My memory: A study of autobiographical memory over six years”, Cognitive Psychology, Vol. 18(2), pp. 225-252.

Williamson, A. and B. Hoggart (2005), “Pain: A review of three commonly used pain rating scales”, Journal of Clinical Nursing, Vol. 14(7), pp. 798-804.

Wolf, C., D. Joye, T.W. Smith and Y.-C. Fu (2016), The Sage Handbook of Survey Methodology, Sage Publications, London.


← 1. For general guidance see, for example, Groves et al. (2009) and Wolf et al. (2016). For guidance in the context of developing and transition countries see, for example, UN Department of Economic and Social Affairs (2005). For guidance in the context of justice see, for example, Himelein et al. (2010).

← 2. See, for example, Webster’s New World College Dictionary.

← 3. See, for example, Pleasence and Balmer (2012b) and Pleasence et al. (2015, 2017).

← 4. As Genn (1997, p. 159) states, “For many people the law is the criminal law.” Similar ideas were apparent in the findings of focus groups run in connection with the 2010 Legal Capacity of the Ukrainian Population survey. One legal expert is reported to have said that “During the Soviet period people felt ashamed of going to courts. They believed that courts only deal with criminals” (Kobzin et al., 2011, p. 73).

← 5. 24 of 50 surveys for which details are available eschewed legal terminology in their introduction.

← 6. The difficulties involved in translating technical terms were evident in the case of the 2012 Georgian survey, about which it was observed, “at the initial stage of the survey, there was a problem of finding a Georgian term corresponding to the English ‘justiciable event’ that would be appropriate in the Georgian judiciary environment and clear to the public at large. After intensive consultations with experts, it was decided to use the term ‘სამართლებრივი პრობლემა’” (Institute of Social Studies and Analysis 2012, p. 53).

← 7. 35 of 51 surveys for which information is available.

← 8. For example, in the case of the English and Welsh Civil and Social Justice Survey, problem descriptions were reviewed by lawyers working in the relevant fields, as well as being subjected to cognitive testing. In the case of the earlier Paths to Justice surveys, additional qualitative research was also undertaken to explore “the terminology used by the public when referring to ‘justiciable events’” (Genn 1999, p. 16).

← 9. 51 surveys in total.

← 10. 49 of 51 surveys for which details are available.

← 11. Mirroring this, these problems have tended to be asked about only in jurisdictions with a relatively low gross domestic product (GDP) (at purchasing power parity) per capita, as detailed in the International Monetary Fund’s World Economic Outlook Database, October 2016. In the case of land, there were notable exceptions in the cases of Japan, New Zealand and the United Arab Emirates.

← 12. The 2012 Tajikistan survey also asked about justiciable problem experience without reference to any problem categories or examples.

← 13. Or even in a booklet setting out a full list of problems, as in the case of the 2017 iteration of the World Justice Project’s General Population Poll.

← 14. See, for example, Groves et al. (2009) and Schaeffer and Presser (2003).

← 15. This practice also limits multiple counting of single problems. To avoid double counting, surveys have often adopted the practice, when asking about a series of problem types, of requesting that only problems that have not already been mentioned should be reported. This approach can distort the relative reporting rates of different problems. If possible, a better approach is to specify problems carefully (as in Table ‎2.1) to avoid overlaps between categories. This also more clearly allows for different aspects of problem clusters to be reported.

← 16. Satisficing is linked to respondent ability, motivation and difficulty. Increasing questionnaire length through a very large number of repetitive problem types may decrease motivation and increase burden. This can reduce response rates and the completion of questionnaires, as well as lead to faster/shorter answering as the interview progresses (Galesic and Bosnjak 2009). In the context of legal needs surveys, randomisation of problem order (for problem identification) is prudent to counter resulting (and more general) order effects. Commencing follow-up questions only after all problems have been identified is similarly prudent, in order to avoid flagging the effect of responses on questionnaire length. This also allows greater flexibility in sub-sampling.

← 17. Although the questionnaire was later amended to include a more traditional list of problem types.

← 18. 32 of 52 surveys for which information is available.

← 19. In addition to introducing the “difficult to solve” filter, the Paths to Justice surveys also excluded from follow-up problems about which respondents said that they had taken “no action whatsoever … because the problem had not been regarded as important enough to warrant any action” (Genn 1999, p. 14).

← 20. See, for example, Pleasence and Balmer (2013b).

← 21. See, for example, Pleasence and Balmer (2017).

← 22. The OECD’s Glossary of Statistical terms defines “own account” workers as “self-employed persons without paid employees”.

← 23. Gindling and Newhouse (2013, p. 15) report that 52 per cent of workers in low income countries, and just 9 per cent in high income countries, work on their own account. This is largely associated with work in agriculture, with less of a difference if agriculture is excluded (18 per cent versus 8 per cent). Across all countries, 33 per cent of workers work on their own account (16 per cent if agriculture is excluded). As Gindling and Newhouse went on to explain, “as per capita income increases, the structure of employment shifts rapidly, first out of agriculture into unsuccessful non-agricultural self-employment, and then mainly into non-agricultural wage employment”.

← 24. For example, an owned business, self-employment, professional practice or farming.

← 25. Evidently, for less common problem types, it will be less likely that reasonable estimates of the total incidence of personal and business related problems are possible.

← 26. See, for example, Pleasence and McDonald (2013) and Kemp et al. (2007).

← 27. In England and Wales, the 2014 Legal Problem Resolution Survey used a sample frame that drew on the previous year’s Crime Survey for England and Wales, thus enabling links between victimisation survey data and legal needs survey data (Franklyn et al. 2017).

← 28. United Nations Office on Drugs and Crime and United Nations Economic Commission for Europe (2010).

← 29. 42 of 49 surveys for which information is available.

← 30. Visual analogue scales were introduced by Hayes and Patterson (1921) but took many years to gain popularity (Streiner et al. 2015).

← 31. For example, in the measurement of anxiety (Davey et al. 2007) and particularly pain (e.g. Hjermstad et al. 2011).

← 32. Selected following the conduct of an online survey designed to explore the perceived relative seriousness of a broad range of short problem descriptions through a version of the VAS.

← 33. The reliability of a scale is related to the number of items included.

← 34. For example, Davey et al. (2007) proposed that either a single 5-point Likert scale or VAS could provide a simple, quick and adequate measure of anxiety when compared to the 20-item State-Trait Anxiety Inventory.

← 35. For example, the field of pain research (Hawker et al. 2011).

← 36. There have also been some instances of NRSs being more responsive than VASs and Verbal Rating Scales (Ferreira-Valente et al. 2011).

← 37. 43 of 54 surveys for which information is available.

← 38. While this form of cluster sampling allows for additional analyses of intra-household aspects of justiciable problem experience and problem resolving behaviour, it must be accounted for when conducting inferential statistical analyses. Some earlier surveys to adopt this data structure – including the original Paths to Justice survey – did not do so, likely resulting in underestimation of the standard errors associated with model coefficients. The appropriate approach for accounting for clustered data is to utilise multilevel models (Goldstein 2011, Rasbash et al. 2012).

← 39. In general, reference periods assume respondents can place events accurately in time, but dates are the hardest to remember with any precision (Wagenaar 1986). One common phenomenon is “telescoping” events into a reference period (i.e. they seem closer than they actually were), although backward telescoping out of a reference period is also common (Groves et al. 2009). As time passes (and as reference periods are longer) errors in both directions increase (Rubin and Baddeley 1989).

← 40. 15 of 49 surveys for which information is available.

← 41. 10 surveys.

← 42. 5 surveys.

← 43. 1 of 49 surveys for which information is available.

← 44. The need for terminology has occasionally been avoided in the case of list-based questions, by reference to “the following”.

← 45. Some jurisdictions, such as England and Wales, have relatively complex regulatory frameworks, reflected in some forms of segmentation. See, for example, OXERA (2011).

← 46. Additional distinctions could be incorporated into Table ‎2.2 but have not been in this instance.

← 47. Although the 2006 New Zealand survey did ask about process in the context of outcomes, process would not necessarily have been identified in this manner.

← 48. Around three-quarters of surveys in each case.

← 49. Of the 45 surveys with relevant questions and for which details were available, 17 employed separate questions, 25 employed lists and 3 employed open questions.

← 50. 26 of 48 surveys for which details are available.

← 51. To adopt the language of Barendrecht et al. (2006).

← 52. 40 of 48 surveys for which details are available.

← 53. See, for example, Lind and Tyler (1988).

← 54. See, for example, Barendrecht et al. (2006, 2010). The project was directed to developing a methodology to measure the cost and quality of access to justice.

← 55. Verdonschot et al. (2008, pp. 7-8) summarise the potential criteria of distributive justice as being based on equity (proportionate to contribution), equality (equal shares), need (proportionate to individual needs), accountability (proportionate to volitional contribution) and efficiency (to maximise the welfare of the parties).

← 56. The survey included 21 questions focused on process, 9 relating to courts and 12 to conciliation.

← 57. The 2009 Bangladesh survey also notably included seven questions on process quality, including three on informational justice. It also asked about whether the community regarded the outcome as fair, although this would be beyond the purview of the respondent.

58. Just 1 survey for which details are available – the 2016 Moldovan survey – did not ask about costs in any way.

59. 45 of 47 surveys for which details are available.

60. 44 of 47 surveys for which details are available.

61. For reasons of sample efficiency, specificity and reliability, legal needs surveys are not well suited to establishing the typical costs, or range of costs, of different types of legal services. Thus, the Legal Services Board in England and Wales, the regulatory authority for legal services, conducts supply-side, in preference to demand-side, research to establish these (Legal Services Board 2017).

62. Such questions were asked in, respectively, the 2015 Survey of Individuals’ Handling of Legal Issues and the 2012 Legal Services Benchmarking Survey, both commissioned by the Legal Services Board in England and Wales.

63. Overall, 24 of 45 surveys for which details are available asked how much was paid for legal services. In total, 30 surveys asked how much was paid for legal services or how much was paid/incurred in total to resolve problems.

64. And opportunity costs.

65. 22 of 45 surveys.

66. 12 surveys, comprising the 10 Justice Needs and Satisfaction Surveys and the 2010 and 2012 English and Welsh Civil and Social Justice Surveys.

67. 13 surveys, comprising the Paths to Justice Surveys, the 2001 English and Welsh Civil and Social Justice Survey and the 10 Justice Needs and Satisfaction Surveys.

68. Important relationships were defined as “the respondent’s relations with family, friends, colleagues, employer/s etc.”

69. A 5-point Likert scale was used for responses throughout. Other surveys, such as the earlier Paths to Justice surveys, used blunter dichotomous questions.

70. This is slightly different from asking whether help or financial support was obtained from legal aid, which was marginally more common.

71. See Pleasence et al. (2014) for a definition and discussion of the concept of legal capability.

72. See, above, n. 28.

73. Which could be termed the “prefigurative” dimension of legal consciousness (McCann 2006).

74. See, for example, Parle (2009), Collard et al. (2011), Coumarelos et al. (2012), Canadian National Action Committee on Access to Justice in Civil and Family Matters (2013), Pleasence et al. (2014), Pleasence and Balmer (forthcoming).

75. 45 of 47 surveys.

76. 22 and 14, respectively, of 47 surveys.

77. 36 of 47 surveys.

78. 18 of 47 surveys. Some general questions were asked about defined problem scenarios.

79. 4 of 47 surveys, along with one further survey that asked respondents whether they had adequate knowledge for decision-making. The 2016 Argentinian survey asked about knowledge in this way, although there was no reference to law or rights within the question. In all, three surveys have asked whether respondents felt they had sufficient knowledge to act.

80. The 2010 and 2012 English and Welsh Civil and Social Justice Surveys also quizzed respondents about their knowledge of law relating to four hypothetical scenarios and their own problems.

81. Again 4 surveys. Only the 2016 Moldovan survey asked about knowledge of law and processes.

82. It may also reflect the difficulty of assessing understanding of law, particularly in relation to identified problems. Levels of self-assessed legal understanding differ markedly from actual levels of understanding (Pleasence et al. 2015), and it is infeasible to test people on their understanding of a broad range of legal issues.

83. 8 of 47 surveys. See Pleasence et al. (2011); Pleasence and Balmer (2014).

84. 19 surveys.

85. Although the Japanese question was not framed in terms of confidence, it asked “Do you think you could obtain the desired outcome if you informed the other party of your claim?” See Murayama (2007).

86. Defined by Gramatikov and Porter (2011, p. 169) as “the subjective self-belief that a person possesses […] in their ability to mobilise the necessary resources, competencies, and energies to solve particular problems of a legal nature.” Thus, subjective legal empowerment is a domain-specific form of self-efficacy. Self-efficacy was most notably defined by Bandura (1997, p. 3) as referring to “beliefs in one's capabilities to organise and execute the courses of action required to produce given attainments”.

87. As in the Justice Needs and Satisfaction Surveys; thus involving 12 rather than 6 questions (in general).

88. Based principally on Rasch analysis, which is used to ascertain whether questions in a group form the basis of an effective measure of a unidimensional domain (such as subjective legal empowerment) and, if so, to specify a scale. See, for example, Bond and Fox (2015).

89. For example, in relation to person separation and differential item functioning. Person separation refers to the extent to which a scale discriminates between high and low scoring individuals. Differential item functioning refers to the extent to which a question may address different abilities for different sub-groups.
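
As a rough illustration of the model underlying Rasch analysis (not the full estimation procedure used in scale development), the dichotomous Rasch model expresses the probability of a positive response as a logistic function of the difference between a person's ability (here, subjective legal empowerment) and an item's difficulty. The parameter values below are hypothetical.

```python
import math

def rasch_probability(theta: float, difficulty: float) -> float:
    """Dichotomous Rasch model: probability that a person with ability
    theta responds positively to an item of the given difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# When ability equals item difficulty, the response probability is 0.5.
p_equal = rasch_probability(theta=0.0, difficulty=0.0)   # 0.5

# A more empowered respondent is more likely to endorse the same item.
p_high = rasch_probability(theta=1.5, difficulty=0.0)
p_low = rasch_probability(theta=-1.5, difficulty=0.0)
```

Fitting the model to a group of questions, and checking properties such as person separation, is what establishes whether the questions form an effective unidimensional scale.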

90. Of particular relevance in the context of limited resources and access to justice policy centred on relative need.

91. Subsecretaría de Acceso a la Justicia (Ministerio de Justicia y Derechos Humanos) (2016, p. 7).

92. As Pleasence et al. (2001, p. 11) described the “background assumption” of much legal needs research in the 1960s and early 1970s.

93. Such as Lewis (1973), Marks (1976) and Griffiths (1980).

94. For example, the 1980 Hughes Commission (1980, paras. 2.09 and 2.10) influentially argued that legal need involves two distinct, staged, needs: the need for information about law and legal services to enable properly informed choices and the need for such support from legal services as is necessary if a legal solution is chosen: “In assessing the need for legal services, we must therefore think in terms of two stages - firstly enabling the client to identify and, if he judges it appropriate, to choose a legal solution; and secondly, enabling the client to pursue a chosen legal solution … When we speak of 'unmet need' we are concerned about instances where a citizen is unaware that he has a legal right, or where he would prefer to assert or defend a right but fails to do so for want of legal services of adequate quality or supply.”

The Hughes Commission’s definition of legal need centres on the determination of appropriate solutions by citizens in need. This reflects the Commission’s preference for “felt need” (defined by those in need) over “expressed need” (felt need that is acted upon), “normative need” (defined by experts) and “comparative need” (assessed by comparison of service use by those with similar characteristics), to use Bradshaw’s (1972) dominant taxonomy of social need. Even with such a definition of legal need, issues remain concerning the nature and extent of the state’s responsibility to intervene to prevent individual needs going unmet, as described in the report of the 2005 Northern Irish survey (Dignan 2006, p. 4). Furthermore, given limited public resources, these issues must be considered alongside the effectiveness of services, citizens’ resources and the prioritisation of needs. Attention must also be given to a further dimension of need – relative need – drawing on theories of a hierarchy of needs (as set out, most famously, by Maslow (1943)). In practice, the prioritisation of legal needs may depend upon whether the responsibility to meet them is considered a constitutional matter (i.e. grounded in the rule of law) or a welfare matter (i.e. grounded in general welfare service provision).

95. See Legal Services Agency (2006), Ignite Research (2006).

96. See La Rota et al. (2012).

97. See Subsecretaría de Acceso a la Justicia (Ministerio de Justicia y Derechos Humanos) (2016).

98. See Colombia’s Department of National Planning: http://dnpsig.maps.arcgis.com/apps/Cascade/index.html?appid=b92a7ab2fe6f4a06a6aec88581d6873e

99. A stark illustration of the impact of methodological change on findings is provided at n. 183.

100. Subject to an overall cap on problems followed-up that was rarely exceeded.
