The Evidence Portal

Glossary of terms

Key term Definition
Adverse effects A negative effect of a program on client outcomes. For example, after participating in a program, participants were more likely to report high levels of stress.
Client outcomes The changes that occur for individuals, groups, families, or communities during or after an activity. Changes can include attitudes, values, or behaviours. 
Core components Program components that are common across evidence-informed programs. 
Direction of effect The direction of effect tells us what kind of effect a program has on client outcomes: positive, negative or neutral.
Dismantling studies Studies that identify the various components of a program and test the effectiveness of each component on its own.
Early intervention services Services that support vulnerable children, young people, families and their communities early in life and early in need to improve outcomes.

Effectiveness

The ability of a program to produce the desired effect (i.e. improve client outcomes).

It is used to describe the extent to which a program achieves statistically significant improvements in client outcomes.


Evaluation

A rigorous, systematic and objective process to assess a program’s effectiveness, efficiency, appropriateness and sustainability.

An evaluation measures the overall impact of a program or service and assesses whether it is the best way to achieve client and community outcomes. Evaluation plays a key role in supporting program decision making by helping us understand whether a program is working, in what context, when it’s not, and why.

Evidence Factual information, research or data used to support a claim or belief or to make a decision.
Evidence-informed program

A program that has been rigorously evaluated in a controlled setting and demonstrated effectiveness with specific population groups.

Note: The Evidence Portal describes programs as evidence-informed, rather than evidence-based, because we have not conducted systematic reviews of each program.

Evidence-informed practice

Using evidence to design, implement and improve our programs and services. This evidence can be:

  • research evidence
  • lived experience and client voice 
  • practitioner expertise. 
Evidence Rating Scale The scale in the Evidence Portal Technical Specifications used to rate evidence-informed programs.
Evidence review An evidence (or literature) review collates and summarises the available research evidence (e.g. peer-reviewed journal articles, reports) to answer a specific research question.
Flexible activities Flexible activities are the different ways that core components can be implemented. They can be adapted and tailored to the needs and desires of clients, the local service delivery context and the resources available to communities.
Intervention A program or activity that is conducted with a group of people to improve client outcomes.
Meta-analysis A meta-analysis combines the results of many studies of the same program into a single statistical analysis of the program’s overall effect.
Method How you collect or analyse information. For example, interviews, surveys.
Methodology The rationale for a research approach. It is a perspective taken on the research, which dictates how it is approached. For example, ethnography or critical theory.
Practice evidence Clinical and subject matter expertise, insight and skills.
Pre- and post-test study Pre- and post-test studies collect data from participants before and after a service to monitor changes.
Program A set of activities managed together over a sustained period of time that aim to achieve client outcomes.
Program logic A tool or document that helps us link what we are doing with why we are doing it. It provides a framework for monitoring and evaluating our service activities against the outcomes we want to achieve.
Protective factors Attributes or conditions that can occur at individual, family, community or wider societal level and which moderate risk or adversity and promote healthy development and child and family wellbeing.
Qualitative research

Qualitative research collects and analyses non-numerical data (e.g. text, audio). 

It provides valuable in-depth information about people’s opinions and experiences. However, it may not represent the entire target group, and it cannot measure the size of a program’s effect.

Quantitative research

Quantitative research collects and analyses numerical data.  

It is often used to collect objective information (e.g. demographics), test causal relationships, make predictions and generalise results to wider populations.

Quasi-experimental design studies Quasi-experimental studies compare the outcomes of people who have received a service and people who haven’t. However, they don’t randomly allocate people to a treatment or control group as in a randomised control trial.
Randomised control trial A randomised control trial (RCT) compares people receiving a service (treatment group) to people who do not receive a service (control group) to see if there is a significant difference in their outcomes. RCTs randomly assign participants to a treatment group or control group. This means they have greater control over factors that might influence a person’s outcomes.
Research The systematic process of collecting and analysing data and information to generate new knowledge, answer a question or test a hypothesis.
Risk of bias

The likelihood that a study will give misleading results.

Studies with a high risk of bias can be inaccurate. They could find false positive effects or over-estimate the true effect of a program.

Risk factors Circumstances, conditions or events that increase the probability that a person will have poor outcomes in the future.
Single study An individual or primary research study.
Statistically significant

A statistically significant result is a result that is unlikely to be due to chance, such as a difference in outcomes between people who received a program (intervention group) and people who did not (control group).

If a result is not statistically significant, it means the difference is likely due to chance.

Study design The type of study or research that was undertaken. For example, systematic review, randomised control trial.
Systematic review A type of evidence review that uses an explicit and reproducible method to systematically search, critically appraise and synthesise the findings of multiple studies on a specific topic. Systematic reviews are designed to provide a complete, exhaustive summary of current evidence relevant to a research question.
Target group

The group of people a program has been designed for, or the group of people a program has been shown to be effective with.

Technical Specifications The document which describes the method researchers must follow to conduct an evidence review for the Evidence Portal. It provides guidance, explanations and examples to ensure the process is applied consistently.
Last updated: 07 Mar 2022

