    2018-11-09

    The present study tested whether children with recent-onset tics are capable of suppressing their tics and if so, whether an environmental contingency (namely, reward) modifies this ability. Given evidence that reward can enhance inhibitory control in children without tics (Padmanabhan et al., 2011; Geier and Luna, 2012), we hypothesized that children with recent-onset tics would be able to suppress tics successfully when rewarded for doing so, even though they have little to no experience suppressing tics.
    Acknowledgements We thank Mary Creech and Samantha Ranck for subject recruitment, testing and study management; M. Jonathan Vachon for video editing and randomization; Douglas Woods for advice on setting up the tic suppression paradigm; Marcy Birner for help with recruitment; and the children and parents who generously gave of their time to participate. This work was supported by NIMH (K24MH087913); a Tourette Syndrome Association fellowship to DJG; the Washington University Institute of Clinical and Translational Sciences NIH grant (UL1RR024992, UL1TR000448; including REDCap [see Methods], the Recruitment Enhancement Core, WU PAARC [Washington University Pediatric & Adolescent Ambulatory Research Consortium] and the NeuroClinical Research Unit); and the Intellectual and Developmental Disabilities Research Center at Washington University (NIH/NICHD P30 HD062171). The content is solely the responsibility of the authors and does not necessarily represent the official view of the NIH or the TSA.
    Introduction One hallmark of adolescent risk taking is that, more often than not, it occurs in the presence of peers (for recent review, see Albert et al., 2013). Although the customary explanation of this phenomenon assumes that it arises from explicit peer pressure to engage in risky behaviors, experimental studies of the “peer effect” on adolescent risk taking have demonstrated that the mere presence of peers can increase adolescents’ risk taking even when the adolescents are prohibited from directly communicating with each other (Chein et al., 2011; Smith et al., 2014), an effect that is not seen among adults. This finding suggests that a process other than explicit encouragement to behave recklessly explains why adolescents, but not adults, are more likely to take risks when with their friends. One explanation suggested by prior work is that, during adolescence, the presence of peers affects the way in which rewards are valuated and processed. In behavioral studies, for example, adolescents who are being watched by peers are more oriented toward immediate than delayed rewards (O’Brien et al., 2011; Weigard et al., 2013), and more inclined to pursue rewards even in the face of likely negative outcomes (Smith et al., 2014). Prior neuroimaging work further shows that during a risk-taking task, being observed by peers produces heightened activation selectively in brain areas associated with reward processing (e.g., the ventral striatum, VS), and not in other brain regions engaged by the task (e.g., lateral prefrontal cortex, lPFC) (Chein et al., 2011). Consistent with the behavioral evidence, this increased activation during peer observation is found among adolescents, but not among adults.
No prior studies have investigated how the presence of peers moderates age differences in reward processing, but there have been several studies of age differences in reward sensitivity when individuals are alone (e.g., Bjork et al., 2004; Galvan et al., 2006; Padmanabhan et al., 2011; Van Leijenhorst et al., 2009). Several such studies report age differences in striatal engagement during reward processing. The majority show that, relative to both children and adults, adolescents are more sensitive to rewards and show greater activation in striatal regions typically associated with reward processing (Barkley-Levenson and Galvan, 2014; Christakou et al., 2011; Galvan et al., 2006; Galvan and McGlennen, 2013; Geier et al., 2010; Hoogendam et al., 2013; Jarcho et al., 2012; Padmanabhan et al., 2011; Van Leijenhorst et al., 2009). There are also several studies, however, reporting a dampened striatal response to reward during adolescence (Bjork et al., 2004, 2010; Hoogendam et al., 2013; Lamm et al., 2014) and others that do not find any effect of age on striatal response (Benningfield et al., 2014; Krain et al., 2006; Teslovich et al., 2013; Van Leijenhorst et al., 2006). Despite these inconsistencies in the literature, which are likely due to differences in the specific tasks employed and the specific stages of reward processing under investigation (e.g., anticipation or receipt) (for recent review, see Richards et al., 2013), the weight of the available evidence seems to indicate increased striatal responding to rewards during adolescence. Whether this age difference in the activation of reward circuitry is moderated by the presence of peers, and whether any such moderating influence arises during reward anticipation, reward receipt, or both, is unknown.