Applied Behavior Analysis Blog

Dr. Katherine May

3 Levels of Scientific Understanding

3 Levels of Scientific Understanding: Description, Prediction & Control.

Description:

Description is the first level of scientific understanding. It is made possible through observations that can be classified and quantified. When observing a behavior, a practitioner can describe what the behavior looks like, which is essential for further scientific understanding.

An example of this is if the practitioner observes little Suzie wave to her friend after hearing her name called, the practitioner can then report that the behavior of waving followed someone calling her name. Using ABC (antecedent-behavior-consequence) recording to observe behavior leads to detailed descriptions of the behavior and what it looks like.

Prediction:

After multiple observations of a behavior, we can confidently predict that the behavior will happen again in the future under the same conditions. Prediction is also referred to as covariation or correlation, but NOT causation. Although past behavior can predict future behavior, it should not be presumed that one event causes the next.

For example, if Sally is late to school every Wednesday, one can confidently predict that she will be late to school the next upcoming Wednesday.

Control:

Control, also referred to as causation, is the third and highest level of scientific understanding. Control is established when a functional relation has been demonstrated. In other words, manipulating one event results in a change in another event.

Control is evident when a treatment has a direct effect on a behavior. For example, if Teddy takes an Advil when he has a headache and the headache goes away, the Advil (the independent variable, or IV) had an effect on the headache (the dependent variable, or DV), and a functional relation is evident.


Read More
Dr. Katherine May

Differential Reinforcement

Differential reinforcement consists of two components: reinforcing the appropriate behavior and withholding reinforcement for the inappropriate behavior.

In this brief blog article, we will examine the different types of differential reinforcement. 

This article is useful for registered behavior technicians (RBTs) and students who are studying to become board certified behavior analysts (BCBAs). Understanding Applied Behavior Analysis (ABA) terms is critical both for being an effective ABA therapist and for passing your BCBA exam.

In an effort to help you study for your BCBA exam more effectively, this post is written in "study note" form rather than as a long-form blog post. These are my personal study notes, shared with you as a gift. My time goes into studying, so they are not edited, and I am grateful for your understanding in overlooking the grammar! Happy Studying!

Differential reinforcement is a technique often used alongside an extinction procedure. It refers to withholding reinforcement for one response while reinforcing a different response, often by redirecting someone to perform it. In other terms, you withhold reinforcement for one behavior and provide reinforcement for another. For example, you will reinforce a child raising his hand in class but not calling out. It can also be used to teach discrimination. For example, you tell the child to point to a picture of a dog, and you reinforce him when he points to the dog but not when he points to a picture of a cat.

There are several types of differential reinforcement:

Differential Reinforcement of an Alternative Behavior (DRA):

Definition of DRA:

Differential reinforcement of alternative behavior (DRA) is an ABA technique in which you put a behavior on extinction and instead reinforce and teach a functionally equivalent replacement behavior. The goal is that the replacement behavior will eventually replace the problem behavior.

You would use DRA when:

You would use DRA when you want to provide access to the same reinforcer that previously maintained the challenging behavior, but contingent on an appropriate alternative behavior.

Examples of DRA:  

Example 1) Kim will scream and yell when she wants a cookie. Screaming is maintained by socially mediated positive reinforcement in the form of access to a tangible. Using differential reinforcement of alternative behavior, you would not give Kim access to a cookie when she screams, and would instead teach her another way to get the cookie, such as asking nicely. This is DRA because asking nicely gives Kim another way to get the cookie without reinforcing the undesirable behavior of screaming, with the goal that asking nicely will replace screaming.

Example 2) Kevin walks up to girls in class and makes inappropriate remarks about their bodies. It is determined that this behavior is maintained by socially mediated positive reinforcement in the form of attention. Kevin is taught to tell girls jokes as a functionally equivalent replacement for making inappropriate remarks. Kevin will use jokes instead of inappropriate remarks to get the attention he seeks.

Example 3) Kristin likes to clap her hands during class. This is very disruptive to her peers, and in order for her to stay in her current classroom setting, this behavior must be reduced. It is determined that the behavior is maintained by automatic positive reinforcement: Kristin is seeking the sensory sensation of clapping her hands. Kristin is provided with a fidget spinner to use in class instead of clapping. The fidget spinner provides the same reinforcement as clapping and is not disruptive to her peers.

Differential Reinforcement of Other Behavior (DRO):

Definition of DRO: Differential reinforcement of other behavior (DRO) refers to reinforcing the absence of a behavior during a given time interval.

You would use DRO when: 

DRO is typically used to decrease a behavior, generally a dangerous behavior, such as aggression or self-injury, that occurs at a very high rate and must be extinguished. You would generally not want to use DRO first, because it does not teach a replacement behavior, and you can inadvertently reinforce other undesirable behaviors by reinforcing anything but the target behavior. DRO does have two notable advantages, though. First, it is very easy for teachers and parents to use. Second, you work on the behavior indirectly by reinforcing its absence, which is important when you do not want a dangerous behavior to occur at all.

Examples of DRO:  

John frequently engages in biting his therapist as an escape behavior. This behavior is dangerous and occurs at a high rate. John’s therapist sets a timer and if he does not bite during the one minute interval he gets a token. 

During recess, Jennifer will frequently run up to children, push them, laugh, and walk away. This occurs fairly regularly every day during recess. The parents of other kids are starting to complain, and the teacher frequently cannot let Jennifer participate in recess. Snack occurs right after recess. The school's BCBA recommends that her teacher provide Jennifer with a special snack if she does not push her friends during recess.

Jillian will often bite her hand when she is upset. This occurs frequently in therapy sessions. She is very highly reinforced by YouTube videos. For every five minutes that she does not engage in biting herself, she is allowed to watch one minute of a YouTube video. If she bites herself, her therapist restarts the timer.
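For those who think procedurally, the resetting-timer logic behind a DRO like Jillian's can be sketched in a few lines of Python. This is purely an illustrative sketch, not a clinical protocol; the function name and the timestamps below are invented.

```python
def dro_reinforcement_times(behavior_times, interval, session_length):
    """Resetting DRO: deliver reinforcement each time `interval` minutes
    pass with no target behavior; any occurrence restarts the timer."""
    behaviors = sorted(behavior_times)
    reinforcers = []
    timer_start = 0.0
    i = 0
    while timer_start + interval <= session_length:
        due = timer_start + interval
        if i < len(behaviors) and behaviors[i] < due:
            timer_start = behaviors[i]  # behavior occurred: restart the timer
            i += 1
        else:
            reinforcers.append(due)     # interval passed behavior-free
            timer_start = due
    return reinforcers

# A 5-minute DRO over a 20-minute session, with self-biting at minutes
# 6 and 13: reinforcement is earned at minutes 5, 11, and 18.
print(dro_reinforcement_times([6, 13], 5, 20))  # -> [5, 11, 18]
```

Note how each bite pushes the next reinforcement opportunity back by a full interval, which is exactly why the therapist "restarts the timer" in the example above.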

Differential Reinforcement of Incompatible Behavior (DRI)

Definition of DRI: 

Differential reinforcement of incompatible behavior (DRI) is when you put one behavior on extinction and instead teach a replacement behavior that is impossible for the child to engage in at the same time. Most times when DRI is used, it is also a DRA, because you should almost always ensure you are teaching a functionally equivalent replacement behavior. Exceptions include automatically reinforced dangerous behaviors for which there is no safe functionally equivalent replacement, or when the incompatible behavior being reinforced is simply compliance (the opposite of noncompliance).

You would use DRI when:

You should use DRI when you are trying to reduce a behavior by providing the child with an alternative activity that they cannot engage in at the same time.

Examples of DRI:  

Example 1) Brian frequently engages in skin picking. He will use his right hand to pick the skin off of his left fingers, which has resulted in bleeding and infections. Brian likes fidget spinners. His therapist gives him a fidget spinner and reinforces using the fidget spinner instead of picking. This is DRI because Brian can't use the fidget spinner and pick his skin at the same time.

Example 2) Beverly frequently bites her therapist. A functional behavior assessment determines that this behavior is maintained by automatic positive reinforcement. Beverly is provided with wearable chewable jewelry and is taught to bite the jewelry instead. This is DRI because she cannot bite her therapist and the jewelry at the same time. Note that this is also a DRA procedure because the two behaviors serve the same function. Technically, if the two behaviors can't be performed at the same time, it is a DRI, but this example clearly illustrates that there can be a lot of overlap.

Example 3) Billy frequently gets out of his seat during class. When he stands up his teacher tells him to sit down and then praises him for sitting nicely. This is a DRI because Billy can’t be sitting in his seat and be out of his seat at the same time. However, in this example, the two behaviors are not functionally equivalent.  This is an example of when you are using DRI to reinforce the opposite behavior of the behavior on extinction. 

Differential reinforcement of lower rates of behavior (DRL):

Definition of DRL:

DRL is used when you reinforce a behavior only when it occurs fewer than a predetermined number of times in a time period.

You would use DRL when:

You would use DRL when a behavior is acceptable but occurs at a rate that is too high. The goal of DRL is not to eliminate or replace the behavior but to lower the rate at which it occurs, that is, to increase the interresponse time between occurrences of the behavior. More often than not, the reinforced rate is set based on baseline, and the criterion changes as the behavior starts to decrease.

There are three different types of DRL:

Full Session DRL: 

Full session DRL is when you provide reinforcement when a behavior occurs less than a predetermined number of times in an entire treatment session. This is an effective strategy for teachers because it is easy to implement. It does require a student to be able to wait until the end of a treatment session to gain access to reinforcement. 

Example of Full Session DRL) Alyssa constantly asks for help with every problem on a 10 question worksheet. This has created a dependence on her teacher. Alyssa is told she can only ask for help 5 times per worksheet. Eventually, this is reduced to 3 times and 1 time. Alyssa eventually learns to complete her worksheets without excessively asking for help and gains confidence and independence.
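The arithmetic of a full-session DRL is just a count compared against a limit. Below is a minimal sketch of Alyssa's fading criterion; the per-worksheet counts are invented for illustration and are not from the example above.

```python
def full_session_drl(count, limit):
    """Reinforce only when the whole-session count is at or under the limit."""
    return count <= limit

# Hypothetical data: the help-request limit is lowered step by step
# (5 -> 3 -> 1) as Alyssa's behavior improves.
criteria      = [5, 5, 3, 3, 1]   # limit per worksheet
help_requests = [4, 5, 3, 4, 1]   # invented counts per worksheet
earned = [full_session_drl(c, lim) for c, lim in zip(help_requests, criteria)]
print(earned)  # -> [True, True, True, False, True]
```

The fourth worksheet earns nothing because four requests exceed the limit of three; the whole session is the unit of measurement, which is what makes this variant so easy for teachers to run.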


Interval DRL:

Interval DRL is when you break a treatment session into equal intervals.  Reinforcement is delivered at the end of the interval if the behavior occurred under a predetermined number of times. As soon as a behavior exceeds the occurrence limit, the interval is reset. Interval DRL is more work to implement but it provides the learner more frequent access to reinforcement. 


Example of Interval DRL) Alexander gets up frequently during class. He will get up at least 10 times per hour. It is okay for Alexander to get up during class, but his teacher believes he is getting up too often. Alexander's teacher tells him that he is only allowed to get up 5 times per hour during class. When Alexander gets up five or fewer times per hour, he receives a sticker.

Spaced Responding DRL:

Spaced responding DRL is when you provide reinforcement when the interresponse time of a behavior is greater than a minimum specified amount of time. This is the most effective DRL procedure for making sure that a behavior is reduced and not eliminated.  Like interval DRL, spaced responding DRL also provides a learner with more frequent access to reinforcement. It is the only DRL procedure that provides reinforcement immediately after a behavior occurs. With full session and interval DRL, reinforcement could be obtained if the rate of behavior is 0. 

Example of Spaced Responding DRL) Amy raises her hand every single time the teacher asks a question and gets frustrated when her teacher does not call on her. Amy's teacher is excited that Amy wants to participate in class and does not want her to stop raising her hand, but wants to reduce her hand raising to a rate that is commensurate with the other students in the class. Amy is told that once she raises her hand, she must wait five minutes before she can raise her hand again. If Amy waits at least five minutes before raising her hand again, her teacher immediately calls on her and tells her she did a good job waiting. If she raises her hand before the five minutes are up, Amy's teacher does not call on her and reminds her she must wait five minutes. Each time Amy raises her hand before five minutes are up, her teacher resets the interval.
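The interresponse-time rule in Amy's example boils down to a single comparison: a response is reinforced only if enough time has passed since the previous response. A minimal sketch (function name and timestamps are made up):

```python
def spaced_responding_drl(response_times, min_irt):
    """Reinforce a response immediately only when the time since the
    previous response is at least `min_irt`; every response, reinforced
    or not, restarts the clock."""
    reinforced = []
    last = None
    for t in sorted(response_times):
        if last is None or t - last >= min_irt:
            reinforced.append(t)
        last = t  # a too-early response still resets the interval
    return reinforced

# Hand-raises at minutes 0, 3, 9, and 20 with a 5-minute minimum IRT:
# the raise at minute 3 comes too soon and is not reinforced.
print(spaced_responding_drl([0, 3, 9, 20], 5))  # -> [0, 9, 20]
```

Notice that `last = t` runs even for the premature response at minute 3, which is the code equivalent of Amy's teacher resetting the interval.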

Differential reinforcement of higher rates of behavior (DRH):

Definition of DRH: DRH is used when you reinforce a behavior only when it occurs more than a predetermined number of times in a time period.

You would use DRH when: 

You would use DRH when you want to increase a behavior, usually one the child already knows how to do, that occurs at a rate that is too low, and there is no corresponding behavior you want to decrease. For example, manding (requesting). The goal of DRH is to decrease the interresponse time between occurrences of the behavior.

Examples of DRH:

Example 1) 

Paul knows how to make his bed but he frequently forgets. It is important to his parents that he makes his bed every day. Right now Paul independently makes his bed 3 days per week. His parents use DRH and tell Paul he can go to the movies on Friday night if he makes his bed at least 4 times in a week.

Example 2) 

Peter is very shy. He usually knows the answers to questions in class but rarely raises his hand. Currently, he only raises his hand on average once per day. His teacher has a classroom store and students can earn coupons to redeem for items in the classroom store. His teacher tells Peter that if he raises his hand at least three times per day, he will earn a coupon.

Example 3) 

Paola is learning to ask for what she wants using a manding program designed by her BCBA to establish functional communication. Paola is verbal and knows how to ask for what she wants, but she is often quiet and withdrawn and will not use her words to get her needs met. Her BCBA has determined that she currently asks for what she wants only about 5 times per half hour. Her BCBA tells Paola that she can earn 5 minutes of iPad time after a half hour is over if she uses her words to get her needs met at least 10 times.
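DRH is the mirror image of full-session DRL: the count must meet or exceed a minimum rather than stay under a limit. Here is a small sketch of Paola's half-hour windows; the mand timestamps are invented for illustration.

```python
def drh_earned(mand_times, window, minimum, session_length):
    """Return, for each window of the session, whether at least `minimum`
    responses occurred (i.e., whether reinforcement is earned)."""
    n_windows = int(session_length // window)
    counts = [0] * n_windows
    for t in mand_times:
        if 0 <= t < n_windows * window:
            counts[int(t // window)] += 1
    return [c >= minimum for c in counts]

# 12 mands in the first half hour, 6 in the second, minimum of 10:
# iPad time is earned for the first window only.
mands = list(range(12)) + list(range(30, 36))  # invented minute marks
print(drh_earned(mands, window=30, minimum=10, session_length=60))
# -> [True, False]
```

Comparing this with the DRL sketches makes the only real difference obvious: the direction of the comparison (`>=` for DRH, `<=` or `<` for DRL).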


Read More
Tact Extensions Dr. Katherine May

Tact Extensions

Tact extensions allow us to label stimuli in various ways!

Tact Extensions: There Are Many Ways To Label Something!

Tact Extensions:

Once a tact has been established, the tact response can occur under novel stimulus conditions through the process of stimulus generalization. In other words, there are many ways to label one stimulus. Skinner (1957) identifies four different levels of generalization based on the degree to which novel stimuli share the relevant or defining features of the original stimulus. These four types of tact extensions are generic, metonymical, solecistic, and metaphorical.

Generic Tact Extension: A tact evoked by a novel stimulus that shares all of the relevant or defining features associated with the original stimulus. For generic tact extensions, it's easiest to think of stimulus generalization. An example would be seeing a Dunkin' Donuts, then seeing a Starbucks, and labeling them both "coffee shop." This is one response for both stimuli.

Metonymical Tact Extension: A tact evoked by a novel stimulus that shares none of the relevant features of the original stimulus configuration, but some irrelevant yet related feature has acquired stimulus control. Examples include seeing an empty cup and saying "water," or seeing a black cat and later calling a black kettle "cat." In these scenarios the client has made some association between the two stimuli even though they do not share any of the relevant features of the original stimulus.

Solecistic Tact Extension: A verbal response evoked by a stimulus property that is only indirectly related to the proper tact relation. An example of this would be the use of "slang." For example, if you walk into someone and say, "Oh goodness, I am so sorry," and they respond, "you good" instead of "that's okay" or "no problem." Another example would be saying something along the lines of "I ain't got no time for that." This is slang used in place of proper language to convey the same meaning.

Metaphorical Tact Extension: A tact evoked by a novel stimulus that shares some, but not all, of the relevant features of the original stimulus. Essentially this simply means using metaphors. Some examples would be saying a “test was as easy as pie” or “time is money.” 


Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.

Read More
Applied Behavior Analysis Dr. Katherine May

Restitutional Overcorrection

Restitutional overcorrection is a positive punishment procedure in which, contingent on problem behavior, the client is required to restore the environment to its original state and then make it even better than it was before.

Restitutional Overcorrection: A Positive Punishment Procedure

As avid learners of ABA, we are quick to find out that there are a lot of terms to remember, and on top of that it's important to be able to associate terms with a bigger category. Well, here's one of them… restitutional overcorrection falls under the category of positive punishment, because the punishing agent is adding something in order to decrease the future frequency of the problem behavior.

In restitutional overcorrection, contingent on the problem behavior, the learner is required to repair the damage caused by the behavior. The client first restores the environment to its original state, then engages in additional behavior to improve the environment to a state better than it was before the problem behavior occurred. I always remember restitutional overcorrection as a person not being able to "rest" because they are required to do MORE than just restore the environment.

An example of restitutional overcorrection: a kiddo comes home, runs right up the stairs, and tracks mud all through the house; his parents then have him clean up the mud, THEN wax all the floors and polish the windows. In this scenario, the only thing the kiddo was responsible for was tracking mud, but he was required to bring the environment not only back to its original state but to an even better state than it was before.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.


Read More
Dr. Katherine May

Behavior Traps

Behavior traps are defined as an interrelated community of contingencies of reinforcement that can be especially powerful, producing substantial and long-lasting behavior changes.

What is a behavior trap and how do you use it effectively?

Behavior traps are powerful contingencies of reinforcement with four defining features:

  1. Clients are “baited” with virtually irresistible reinforcers

  2. Only a low-effort response already in the client's repertoire is needed to enter the trap

  3. Interrelated contingencies of reinforcement inside the trap motivate the student to acquire, extend, and maintain targeted skills

  4. Traps can remain effective for a long period of time

Behavior traps are a wonderful way to have clients acquire new skills and also are very effective at promoting generalization. 

Alber and Heward (1996) outlined these five steps to design and use “behavior traps!” I highly suggest you take a look over this article (referenced below) to find out more about behavior traps and the many different strategies to create and implement them.

Identify your prey: In what academic or social areas does the student need the most help? Be sure to target behaviors that are relevant, functional, and that lend themselves to frequent practice opportunities.

Find powerful bait: What does the student like? Watch them when they're alone, or simply ask the student and/or their parents, and provide a variety of items for them to sample.

Set the trap: Place desired materials in the student’s path. 

Maintain your trap: Start small. Use variety and give your trap a break periodically.

Appraise your catch: Assess the changes in the targeted skills frequently and directly. Make modifications or set another trap if ineffective.

Cooper illustrates a behavior trap by referring to catching a mouse in your house. There are many ways to catch a mouse: you can chase the mouse with your hands, catch it with a net, OR you can put out an irresistible slice of cheese. Since the cheese is an especially powerful reinforcer for the mouse, the mouse will naturally be "baited" and lured out of hiding to get the cheese. Essentially, when implementing behavior traps we are doing the same thing with our kiddos, but we use this strategy to teach them and prepare them to generalize skills!

When I was teaching a client of mine, his absolute favorite food was pizza. So to create and implement a behavior trap, I integrated pizza into my teaching strategies. As I was teaching fractions, I allowed my client a 1/2 slice of pizza once he learned what 1/2 represented, then 1/3, 1/4, and so forth. This was extremely successful, and he was later able to teach his peers how to use fractions. Not only did the skill remain in his repertoire, but it also generalized across people and settings.

So go find your “prey”, your bait… and go set that trap!! Future behaviors are waiting on YOU!

References

Alber, S. R., & Heward, W. L. (1996). "Gotcha!" Twenty-five behavior traps guaranteed to extend your students' academic and social skills. Intervention in School and Clinic, 31(5), 285-289.

Baer, D. M., & Wolf, M. M. (1970). The entry into natural communities of reinforcement. In R. Ulrich, T. Stachnick, & J. Mabry (Eds.), Control of human behavior (pp. 319-324). Glenview, IL: Scott, Foresman.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (2nd Edition). Upper Saddle River, NJ: Pearson Education.


Read More
Dr. Katherine May

EO vs. AO


Establishing Operations vs. Abolishing Operations


Establishing operations (EOs) are motivating operations that momentarily increase the effectiveness of some stimulus, object, or event as a reinforcer. For example, food deprivation establishes food as an effective reinforcer. Abolishing operations (AOs) do the opposite: they are motivating operations that momentarily decrease the effectiveness of some stimulus, object, or event as a reinforcer. For example, food satiation abolishes the effectiveness of food as a reinforcer. In other words, when you're full, you're not going to find food very motivating.


For example: if there is a glass of water on Mrs. Miller's desk, that untouched glass of water serves as an SD, signaling that reinforcement is available (if she chooses to drink it). As she keeps lecturing to her students, she begins to get parched (a state of water deprivation). This EO momentarily increases the value of the water as a reinforcer and evokes picking up the glass and drinking. Once she is no longer thirsty (satiated on water), an AO is in effect: the value of the water decreases, she puts the glass down, and the glass simply resumes its role as an SD, no longer backed by a strong MO.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (2nd Edition). Upper Saddle River, NJ: Pearson Education.


Read More
Dr. Katherine May

Sequence Effects vs. Multiple Treatment Interference

Sequence effects occur when effects from a previous condition carry over and affect responding in the following condition, whereas multiple treatment interference occurs when one treatment confounds another when they are presented in the same study.

Overview: Sequence effects occur when the effects of an intervention from one condition carry over into the next condition. They are typically a concern in multiple treatment reversal designs and B-A-B reversal designs. Sequence effects can skew the data in the following condition, because the data will not accurately depict what is really happening; to reduce them, you would need to continue taking data until the sequence effects subside.

The experimental design used to minimize sequence effects is the alternating treatments design, in which all treatments run independently and simultaneously. Note that while minimizing sequence effects is an advantage of the alternating treatments design, a disadvantage of that design is multiple treatment interference. This is always a concern with alternating treatments designs because of the unnatural nature of rapidly switching between treatments.

Multiple treatment interference occurs when the effects of one treatment on a subject's behavior are confounded by the influence of another treatment administered in the same study. In other words, one treatment affects the other as the treatments are alternated, not from a previous condition, but during the process of switching between two or more treatments.

The main difference: As stated above, sequence effects occur when the effects of an intervention carry over and affect the next condition, whereas with multiple treatment interference the effects of one treatment on a subject's behavior are confounded by the influence of another treatment. In simpler terms: sequence effects are when a treatment is affected by a previous condition, and multiple treatment interference is when treatments are affected by interference between one another. With multiple treatment interference, the treatments interfere with each other, making it difficult to see which intervention is the most effective. Furthermore, sequence effects are specific to reversal designs, while multiple treatment interference is specific to alternating treatments designs.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (2nd Edition). Upper Saddle River, NJ: Pearson Education.



Read More
Dr. Katherine May

Automatic Reinforcement

Automatic reinforcement is reinforcement that is not socially mediated by others.

Automatic Reinforcement: Auto = Self!

Automatic reinforcement refers to reinforcement that occurs independent of the social mediation of others. Response products that function as automatic reinforcement often take the form of a naturally produced sensory consequence that "sounds good, looks good, tastes good, smells good, feels good to the touch, or the movement itself is good" (Rincover, 1981; Cooper, Heron, & Heward, 2019). An example would be scratching an insect bite to relieve the itchy sensation you are feeling. With automatic reinforcement, the person is able to reinforce themselves. Some examples are: scratching an itch, cracking knuckles, watching the lights go on and off, eating your favorite cookies, etc.

If another person does not play a role in the function of the behavior, then the behavior is automatically reinforced. However, if another person does play a role in the function of the behavior, this is considered socially mediated reinforcement. For example, if the function of little Kimmy's behavior is to seek her mother's attention and her mother gives it to her, this is socially mediated reinforcement; but if the function of little Kimmy's behavior is to get access to a cookie simply because she loves how it tastes, and she retrieves it herself, this is automatic reinforcement.

Another example of automatic reinforcement would be turning on the radio yourself as opposed to asking your pal to do it for you. If a pal turned the radio on for you, this would be socially mediated positive reinforcement instead of automatic reinforcement.

Practitioners can also suspect that a behavior is automatically reinforced when the behavior persists in the absence of any known reinforcer, for example, instances of self-stimulatory behavior or stereotypy. This is because these behaviors can produce sensory stimuli that function as reinforcement… automatic reinforcement, that is!


Cooper, J. O., Heron, T. E., & Heward, W. L. (2019). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.

Vaughan, M. E., & Michael, J. L. (1982). Automatic Reinforcement: An Important but Ignored Concept. Behaviorism, 10(2), 217–227. 


Read More
Dr. Katherine May

Functional Analysis

Functional analysis is a part of the FBA process in which a practitioner manipulates antecedents and consequences in the client's environment to determine the function of challenging behavior.

Functional Analysis: Find The Function Of The Challenging Behavior!

A functional analysis is the part of the Functional Behavior Assessment (FBA) process that confirms the hypothesis about the function of the challenging behavior. This is the hardest part of the FBA, as it yields the most information and consists of experimentally manipulating the environment to test for the function of the challenging behavior. Antecedents and consequences representing those in the client's natural environment are arranged so that their separate effects on problem behavior can be observed and measured (Cooper, Heron, & Heward, 2007). To do this, the practitioner sets up specific conditions: play, escape, attention, and alone. By determining which condition produces the highest frequency of the behavior, practitioners can identify the likely function of the challenging behavior.

Running the Conditions:

During an FA, the practitioner is testing one condition at a time.

Attention Condition: In the attention condition, only the practitioner and the client are in the room. The practitioner withholds attention and delivers it only when the client engages in the challenging behavior. This allows the practitioner to determine whether the challenging behavior is maintained by access to attention.

Escape Condition: In the escape condition, only the practitioner and the client are in the room. The practitioner delivers task demands; when problem behavior occurs, the demands are removed. This allows the practitioner to determine whether the challenging behavior is maintained by escape from demands.

Play Condition: The client is permitted to play. This is the control condition, in which problem behavior is expected to be low because reinforcement is freely available and no demands are placed on the client.

Alone Condition: The practitioner and the client are alone in the room while the client engages in an activity. The practitioner gives no attention and places no demands, to ensure that any behavior observed is not being reinforced through the social mediation of others.

If problem behavior occurs frequently in all conditions, or varies inconsistently across conditions, responding is considered undifferentiated: the results are inconclusive, and the function of the problem behavior may be automatic reinforcement or simply cannot be determined from the analysis.
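The logic of interpreting FA results can be sketched in code. This is a hypothetical illustration only (the condition data and the "twice the others" differentiation rule are invented for the example; real analyses rely on visual inspection of graphed session data):

```python
# Hypothetical functional-analysis data: count of problem behaviors
# per session, grouped by condition. All numbers are invented.
fa_data = {
    "attention": [9, 11, 10],
    "escape":    [2, 1, 3],
    "alone":     [1, 0, 2],
    "play":      [0, 1, 0],   # control condition
}

def summarize_fa(data):
    """Return mean responding per condition and a tentative function."""
    means = {cond: sum(counts) / len(counts) for cond, counts in data.items()}
    test_means = {c: m for c, m in means.items() if c != "play"}
    best = max(test_means, key=test_means.get)
    others = [m for c, m in test_means.items() if c != best]
    # Crude differentiation check (invented rule): the top condition
    # should clearly exceed the rest, else responding is undifferentiated.
    differentiated = test_means[best] >= 2 * max(others) if others else True
    return means, (best if differentiated else "undifferentiated")

means, function = summarize_fa(fa_data)
print(function)  # -> attention, for this invented data set
```

If the escape and alone conditions had produced similarly high counts, the same check would return "undifferentiated", mirroring the inconclusive case described above.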


Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.

Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36(2), 147–185.


Dr. Katherine May

Compound Schedules of Reinforcement

Compound Schedules of Reinforcement combine two or more elements: continuous reinforcement, the four intermittent schedules of reinforcement, differential reinforcement of various rates of responding, and extinction.

Compound Schedules of Reinforcement: Defined and Applied

In Applied Behavior Analysis, practitioners can combine two or more basic schedules of reinforcement to form compound schedules of reinforcement. The components can include continuous reinforcement, intermittent schedules of reinforcement, differential reinforcement of various rates of responding, and extinction. It is important to note that the component schedules can occur simultaneously or successively, and with or without a discriminative stimulus (SD).

There are various types of compound schedules of reinforcement, continue reading below to find out more:

Multiple Schedule of Reinforcement: This is when two or more schedules of reinforcement for one behavior are each presented with a different discriminative stimulus. For example, a third-grade kiddo, Jake, was working on his multiplication facts. When he worked with his math teacher he was required to get 12/20 multiplication facts correct to receive reinforcement, but when he was working with his math tutor he had to get 17/20 correct to receive reinforcement. Therefore, the schedule of reinforcement depended on which person he was working with (the SD): he could get reinforcement on an FR 12 or an FR 17 schedule based on which SD was present.

Mixed Schedule of Reinforcement: This is when two or more schedules of reinforcement for one behavior are each presented without any discriminative stimulus. The schedules alternate in a random order, so the client does not know which one is in effect at any given time, which keeps the behavior occurring at a high rate. For example, Leslie was working on eating her vegetables with the BCBA, Thomas. Leslie sometimes received reinforcement for eating a spoonful of vegetables, and sometimes for taking 5 bites of her vegetables. Because she does not know which schedule of reinforcement is in effect at any given moment, her behavior will continue to occur at a high rate.

Chained Schedule of Reinforcement: This compound schedule of reinforcement has two or more basic schedule requirements that occur successively, with a discriminative stimulus correlated with each schedule. The components always occur in a specific order, and completing the first behavior expectation serves as a discriminative stimulus for the next behavior expectation, and so on. For example, when my recipe box gets delivered to my house every Tuesday, I follow the recipe card (the SD), placing one ingredient in the pot after the next in the specific order that the recipe card lays out. I complete this chain in about 20-30 minutes.

Tandem Schedule of Reinforcement: This compound schedule works exactly like the chained schedule, except that no discriminative stimulus is associated with each component; therefore, there is no specific order associated with this schedule. For example, the following week I received my recipe box and this time they forgot to include the recipe card, leaving me to figure out the recipe myself. I still have to put the food in the pot in some order to cook it, just not a specific order, and I complete this recipe in about 20-25 minutes. The trick with tandem schedules of reinforcement is that the behaviors still occur in an order; however, it can be ANY order rather than a specified one.

Concurrent Schedule of Reinforcement: This compound schedule of reinforcement consists of two or more schedules of reinforcement, each with a correlated discriminative stimulus, operating independently and simultaneously for two or more behaviors. Concurrent schedules of reinforcement give the client a choice, and that choice is essentially governed by the matching law: relative rates of responding match relative rates of reinforcement, or, informally, "behavior goes where reinforcement flows." This means the schedule associated with the stronger reinforcer will occasion more of the behavior it reinforces. For example, if I offer my client a half hour of video game playing for sitting with me in the lunchroom, or an hour of video game playing for socializing and sitting with his peers in the lunchroom (the terminal behavior), my client is going to choose to socialize and sit with his peers (even if this is not his preferred activity) because that behavior earns the stronger reinforcer (1 hour of video games vs. a ½ hour).
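The matching law behind concurrent schedules has a simple quantitative form: the proportion of responding allocated to one option equals that option's share of the total reinforcement, B1 / (B1 + B2) = R1 / (R1 + R2). A minimal sketch, with reinforcement rates invented for illustration:

```python
# Matching law: relative response rates match relative reinforcement
# rates. The two reinforcement rates below are invented numbers standing
# in for the weaker and stronger lunchroom options in the example above.
r_option_1 = 30   # e.g., reinforcers/hour for the weaker option
r_option_2 = 60   # e.g., reinforcers/hour for the stronger option

def matching_allocation(r1, r2):
    """Predicted proportion of responding for each of two options."""
    total = r1 + r2
    return r1 / total, r2 / total

p1, p2 = matching_allocation(r_option_1, r_option_2)
print(round(p1, 2), round(p2, 2))  # -> 0.33 0.67
```

With twice the reinforcement on option 2, the law predicts roughly two-thirds of responding will flow there, which is the intuition behind "behavior goes where reinforcement flows."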

Conjunctive Schedule of Reinforcement: In this compound schedule, reinforcement follows the completion of two or more simultaneous schedules of reinforcement; every component requirement must be met. For example, little Nancy must work on her math homework for five minutes and get 10 questions correct in order to receive reinforcement.

Alternative Schedule of Reinforcement: In this compound schedule, reinforcement follows the completion of either component schedule. The schedule consists of two or more simultaneously available component schedules, and the client receives reinforcement as soon as they reach the criterion for any one of them. For example, a client of mine is currently working on an alternative schedule where he can either work quietly in his seat for five minutes or complete five math problems. He receives reinforcement contingent on reaching either criterion; it does not matter which schedule component he meets first.
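The difference between conjunctive and alternative schedules comes down to how the component criteria combine: ALL must be met versus ANY one. A small sketch (the time and accuracy criteria are invented, loosely echoing the Nancy example above):

```python
# Conjunctive vs. alternative schedules differ only in the combining
# rule applied to the same component criteria. Thresholds are invented.
def criteria_met(minutes_worked, problems_correct,
                 min_minutes=5, min_correct=10):
    """Evaluate each component criterion separately."""
    return (minutes_worked >= min_minutes, problems_correct >= min_correct)

def conjunctive(minutes, correct):
    return all(criteria_met(minutes, correct))   # every criterion required

def alternative(minutes, correct):
    return any(criteria_met(minutes, correct))   # any one criterion suffices

print(conjunctive(5, 7))   # -> False: only the time criterion is met
print(alternative(5, 7))   # -> True: one criterion is enough
```

The same session data earns reinforcement under the alternative rule but not the conjunctive one, which is exactly the practical difference between the two schedules.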

Adjunctive Behaviors (schedule-induced behaviors): Behaviors that emerge when compound schedules of reinforcement are in place and reinforcement is unlikely to be delivered. While a kiddo is waiting to be reinforced, they fill in the time with other, irrelevant behavior; in the meantime they might doodle on a pad or pop their bubble gum. These are considered time-filling, or schedule-induced, behaviors.


Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.


Dr. Katherine May

Component Analysis

A Component Analysis is any experimental design used to identify the active elements of a treatment condition, the relative contributions of different variables in a treatment package, and/or the necessary and sufficient components of an intervention.

Component Analysis: Explained

A component analysis is used when a behavior intervention consists of multiple components (a treatment or behavioral package) and the practitioner manipulates each component to see which one is most effective for the client. Formally, a component analysis is any experiment designed to identify the active elements of a treatment package, the relative contributions of different components in a treatment package, and/or the necessity and sufficiency of treatment components (Cooper, Heron & Heward, 2007).

Essentially, a component analysis examines a treatment package to determine which component is most effective by identifying which one most efficiently affects the dependent variable. A component analysis attempts to determine which part of an independent variable is responsible for behavior change. There’s one golden rule: change only one variable at a time.

There are two methods for conducting component analyses; an add-in component analysis and a drop-out component analysis:

  1. Add-in Component Analysis: Components are assessed individually or in combination before the complete treatment package is presented. This method can identify sufficient components.

  2. Drop-out Component Analysis: The experimenter presents the treatment package and then systematically removes components. If the treatment's effectiveness wanes when a component is removed then the experimenter has identified a necessary component.
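The drop-out logic can be sketched in code: start from the full package, remove one component at a time, and flag any component whose removal weakens the effect. The package contents and the effect "measurements" below are entirely invented to make the sketch runnable; in practice the effect would come from measured behavior data, not a formula.

```python
# Hypothetical drop-out component analysis. Component names and effect
# values are invented for illustration only.
FULL_PACKAGE = {"token_economy", "prompting", "noncontingent_attention"}

def treatment_effect(components):
    """Stand-in for the measured behavior change under a given package."""
    effects = {"token_economy": 40, "prompting": 25,
               "noncontingent_attention": 5}
    return sum(effects[c] for c in components)

baseline = treatment_effect(FULL_PACKAGE)
for component in sorted(FULL_PACKAGE):
    reduced = treatment_effect(FULL_PACKAGE - {component})
    if reduced < baseline:   # effectiveness wanes without this component
        print(f"{component} appears necessary "
              f"(effect drops {baseline - reduced} points without it)")
```

The golden rule from above is built into the loop: exactly one component changes per comparison, so any drop in effect can be attributed to the removed component.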

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (3rd Edition). Hoboken, NJ: Pearson Education.

