Did you know 2018

2018:01 Retrospective studies into bloodstream infections and antibiotic resistance patterns

Global concerns about bacterial and fungal antibiotic resistance patterns and the emergence of ‘superbugs’ have led to appeals for antibiotic stewardship on the part of health practitioners. These are important issues and appropriately have resulted in formal reviews of rates of infection and development of resistance in various clinical settings. Of particular interest are the differences between rates and types of infection in community vs. hospital settings, and within hospitals between wards and high- or intensive care units. Many important points emerge from such audits and retrospective reviews. For example, a recently published article on bloodstream infection (BSI) from Cape Town reported a risk of positive culture of 5.7/1000 admissions for community-acquired infection (CAI: pathogen isolated within 48 hours of admission), and 8.5/1000 for hospital-acquired infection (HAI: positive cultures obtained >48 hours after admission). Overall risk of BSI was 6.7/1000 admissions, and the overall BSI yield was 5.7 per 100 cultures performed. Over the two-year study period there was a small drop in the number of hospital admissions subjected to blood culture studies, and fewer positive cultures. Of 693 unique bacterial and fungal episodes, just over one-third were community acquired and the balance related to the hospital itself or a referring facility. E. coli, Staph aureus and Strep pneumoniae were the most frequent causes of CAI, and Klebsiella, Acinetobacter and Staph aureus were most common in HAI. Resistance patterns were worrying: one-third of E. coli isolates and almost 80% of K. pneumoniae were extended-spectrum beta-lactamase (ESBL) producers. The latter were sensitive to carbapenems, but resistance to even these drugs was observed for Acinetobacter and Pseudomonas. In this study (which mainly excluded early neonatal admissions) risk factors for mortality were severe underweight, severe anaemia at the time of admission, and ICU admission after diagnosis of BSI; HIV infection was not a contributor. The authors were concerned about the low BSI yield (5.7 per 100 cultures performed) and cite possible reasons as inadequate volume of blood, inappropriate testing for BSI, and prior use of antibiotics as part of syndromic case management in primary care settings. There was also concern about the number of positive cultures considered to be contaminants (6.6%). This is double the internationally accepted rate, and possibly speaks to poor clinical collection technique and sample processing. As said, these are important findings, but a problem with this review and other recent studies from South Africa is that they are reported many years after the fact. Surely, if we are to have an impact on the problem we need concurrent, real-time audits and the ability to respond quickly, rather than basing policy and practice on what happened 5-6 years ago.
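
Since the summary quotes three separate rates, it may help to see how they interlock. The following back-of-envelope arithmetic (ours, not the paper's) infers the approximate denominators, assuming the 693 episodes serve as the numerator for both the admission-based and culture-based rates:

```python
# Implied denominators behind the headline rates (illustrative arithmetic only).
episodes = 693                       # unique bacterial/fungal BSI episodes
admissions = episodes / 6.7 * 1000   # from overall risk of 6.7/1000 admissions
cultures = episodes / 5.7 * 100      # from yield of 5.7 per 100 cultures performed
print(f"implied admissions over two years: ~{admissions:,.0f}")
print(f"implied blood cultures performed: ~{cultures:,.0f}")
```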

Read more:

BMC Infect Dis 2017; doi: 10.1186/s12879-017-2862-2

S Afr J Child Health 2017; 11: 170-3

BMC Pediatrics 2015; doi: 10.1186/s12887-015-0354-3

2018:02 Does a tobacco 'sin tax' impact on paediatric smoke exposure and asthma severity?

South Africa has a long history of implementation of ‘sin taxes’ on alcohol and tobacco, usually defended on the grounds of public health and wellbeing. Sugar-containing beverages were added as targets for taxation in 2018. There is usually opposition to such taxes, with objections emanating from affected industries that cite production, revenue and job losses, and from free-market proponents who complain about ‘nanny taxes’ and the State interfering with personal choice. Beneficial consequences often take years to prove, for example reduced rates of chronic obstructive lung disease and/or lung cancer after tobacco taxation, so it is exciting when researchers are able to come up with outcomes that appear to be measurable over a brief period, if not immediately. An example of the latter appeared in a recent article in Pediatrics in which researchers from Boston, Massachusetts reviewed data on the relationships between asthma severity and the ‘severity’ of tobacco taxes and air quality measures across the United States. The source of the data was the Child Asthma Call-Back Survey (Child ACBS), a follow-up to the Behavioral Risk Factor Surveillance System (BRFSS), a system of health-related telephone surveys that collects state data about U.S. residents regarding their health-related risk behaviors, chronic health conditions, and use of preventive services. BRFSS completes more than 400,000 adult interviews each year. The Child ACBS targets only those adults who acknowledged caring for a child with asthma, and questions focused on home environment, asthma triggers, symptom frequency, medication use and interference with daily activity. Asthma severity was calculated for each patient in accordance with National Heart, Lung, and Blood Institute guidelines and dichotomized into severe persistent vs. non-severe (which included moderate persistent, mild persistent and intermittent). Child ACBS data were analysed in conjunction with the American Lung Association’s annual State of Tobacco Control reports, which cover state-by-state tobacco taxes and air quality control measures. The analysis included 12 860 Child ACBS interviews from 35 states over the period 2006-2010 and showed that a higher tax grade was associated with reduced asthma severity (odds ratio 1.4; CI 1.10-1.80). This evidence that taxation appears to impact smoking, children’s exposure to environmental smoke, and asthma severity is interesting and potentially exciting, but there are two caveats: 1) the cross-sectional nature of the data means that one doesn’t know whether tax increases in a particular state over time impacted on asthma severity; and 2) the ‘lumping’ of severity categories into severe vs. non-severe (the latter actually comprising three categories ranging from moderate persistent to intermittent) could yield a different result if, for example, categorised as severe and moderate vs. others, or intermittent vs. persistent groups. A toy illustration of this second point follows.
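
The sketch below illustrates that second caveat with entirely invented counts: the same four-level data can produce quite different odds ratios depending on where the dichotomizing cut is placed. It is not the study's data or analysis.

```python
import numpy as np

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table with cells a, b / c, d."""
    return (a * d) / (b * c)

# Rows: low-tax vs high-tax states. Columns: invented counts per severity
# level (severe persistent, moderate persistent, mild persistent, intermittent).
counts = np.array([[60, 200, 300, 440],
                   [40, 210, 310, 460]])

for name, cut in [("severe vs non-severe", 1), ("persistent vs intermittent", 3)]:
    yes = counts[:, :cut].sum(axis=1)   # counts at or above the cutpoint
    no = counts[:, cut:].sum(axis=1)    # counts below the cutpoint
    print(f"{name}: OR = {odds_ratio(yes[0], no[0], yes[1], no[1]):.2f}")
# The same underlying data yield OR ~1.56 for one cut and ~1.05 for the other.
```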

Read more:

Pediatrics 2018 https://doi.org/10.1542/peds.2017-1026P

CDC https://www.cdc.gov/brfss

Ann Allergy Asthma Immunol 2015; 115: 396-401

2018:03 Mortality associated with Influenza and Respiratory Syncytial Virus (RSV) 2009-13

Followers of the work done by the South African National Institute for Communicable Diseases (NICD) will know the extent to which their research is contributing to policy in the areas of maternal and child health, e.g. in relation to infection with HIV, Group B streptococcus, pneumococcus, RSV and the influenza viruses. Wherever possible the NICD utilizes hard data, for example from microbiological samples obtained in prospective studies, but often such data are combined with statistics available from national databases, which can be problematic (e.g. in a recent article published by NICD researchers looking into deaths in- and out-of-hospital, 20% of records did not specify location of death). That limitation aside, the article employed sophisticated and elegant methodology to estimate as accurately as possible the national, age-specific, in- and out-of-hospital mortality rates for influenza and RSV for the period 2009-2013. The authors make the point that it is important to estimate out-of-hospital deaths in order to capture the full extent of the virus-related mortality burden. While the research covered all ages, this summary will focus on infants and children under 5 years of age. For influenza-associated deaths the highest mortality rate was in the >75-year group (386/100 000 population) followed by infants (87.3/100 000), while for RSV the peak mortality was in infants (143.4/100 000), followed by the >75s. In the under-fives 37% of RSV deaths occurred out of hospital, while for influenza the figure was higher at 51%. The authors attribute this difference to RSV affecting infants to a greater extent, leading to a greater chance of being hospitalized. However, the overall estimates of respiratory mortality in <5s were relatively similar for RSV and influenza. Interestingly, in the <5s the in-hospital deaths peaked between January and May (the period of RSV circulation) vs. out-of-hospital deaths which peaked between May and July, coinciding with the peak in influenza virus circulation. While RSV infection is recognized as a risk for infants and children, particularly those with cardiac and respiratory conditions, one wonders whether the risk of influenza infection and the possibility of vaccine administration are also considered. In this regard the authors of the article under review refer to the availability of vaccine through publicly-funded programmes since 2010, with approximately 1 million doses distributed annually. While the NICD guidelines and published recommendations clearly identify various paediatric sub-groups as being eligible for vaccination, one wonders how often these guidelines are followed. Another issue that should be noted in regard to the research is the importance of continuing such surveillance because a) rates of infection with influenza virus were influenced by pandemics during the study period and could/would change from one period to the next; and b) interventions such as HIV treatment and pneumococcal vaccination have had a profound impact on mortality from the conditions under review. The cited article and this summary may be particularly timeous given that the US Centers for Disease Control and the media are warning that the 2017/8 influenza epidemic in the USA may be as bad as or worse than the 1918 epidemic, with 24 reported paediatric deaths in the first three weeks of January 2018. It remains to be seen whether the outbreak spreads to this country.

Read more:

Clin Infect Dis 2018; 66: 95-103

S Afr Med J 2016; 106: 251-3

PLoS Med 2013; 10: e1001558

2018:04 Risk of neural tube defects (NTDs) may increase with low carbohydrate maternal diet

South Africa may arguably have one of the highest percentages of consumers exposed to debates around the benefits and risks of a low carbohydrate diet. The issue has been debated in the press, the scientific literature, on television and even within the disciplinary processes of the regulatory Health Professions Council. It is therefore of interest that researchers in the USA have interrogated a large database, focusing on maternal low carbohydrate diet and the potential for birth defects in offspring. Having access to a subject- or topic-specific database is a wonderful asset when investigating epidemiologic issues; the problem is that if one comes up with a positive relationship and concludes that further research is necessary, such research is dependent upon others having access to similar data. The research under discussion was based on NTD risk mitigation in the USA, where there is mandatory fortification of all cereals and grains with 140µg of folic acid per 100g of product, and the possibility of folic acid deficiency occurring if/when mothers consumed low quantities of fortified cereals, pasta, bread etc. prior to and/or during pregnancy. The authors used National Birth Defects Prevention Study data collected between 1998 and 2011, drawn from 3 states and 6 regions. Cases (n=1559) included live births, stillbirths ≥20 weeks and prenatally diagnosed terminations with at least one eligible defect; there were 516 cases of anencephaly and 1043 cases of spina bifida. Controls (n=9543) included live births without a birth defect. Carbohydrate and folic acid intake before conception were estimated from food frequency questionnaire responses obtained in a one-hour telephonic interview. Analyses involved logistic regression, adjusted for maternal race/ethnicity, education, alcohol use, folic acid supplement use, study centre and caloric intake. Restricted carbohydrate intake was defined as ≤5th percentile of control levels, which corresponded to ~95g/day, roughly the target recommended by most low carbohydrate diets. Results showed that mean dietary intake of folic acid among women with restricted carbohydrate intake was less than half that of other women, and women with restricted intake were more likely to have an infant with a neural tube defect (OR 1.30; 95% CI: 1.02, 1.67). Of note is that compared to women with non-restricted carbohydrate intake, those with restricted intake were more likely to be older, non-Hispanic white, USA-born, with more years of education and higher household income. There were no differences in BMI, smoking, or prenatal vitamin and/or folic acid supplement use (possibly explained by the supplement being insufficient to overcome the deficiency introduced by low carbohydrate intake). Overall, as daily carbohydrate intake decreased, the strength of the association with NTDs increased. Pending confirmatory studies, the authors advise health professionals caring for women prior to and during pregnancy to be aware of their patients’ dietary practices and the potential for folate insufficiency in those following a carbohydrate-restricted diet.
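
For readers unfamiliar with how such an adjusted odds ratio is estimated, here is a minimal sketch on synthetic data; the prevalence, covariate and effect size are invented, and the real analysis adjusted for more covariates than shown:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic cohort: ~5% of women below the ~95 g/day carbohydrate cut,
# with a built-in true OR of 1.3 for an NTD-like outcome (illustrative only).
rng = np.random.default_rng(0)
n = 11_000
restricted = rng.binomial(1, 0.05, n)
calories = rng.normal(2000, 400, n)              # one illustrative covariate
logit = -2.0 + np.log(1.3) * restricted + 0.0001 * (calories - 2000)
case = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression of case status on restricted intake, adjusted for calories.
X = sm.add_constant(np.column_stack([restricted, calories]))
fit = sm.Logit(case, X).fit(disp=False)
print(f"adjusted OR for restricted intake: {np.exp(fit.params[1]):.2f}")
```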

Read more:

Birth Defects Res 2018; doi: 10.1002/bdr2.1198

S Afr Med J 2016; 106: 1179-82

Obes Res 2001; 9(Suppl 1): 1s-40s

2018:05 More concern about antenatal steroids for late preterm and early term deliveries

This topic is back in this series of summaries because it continues to be aired in the literature; perhaps more in the neonatal and perinatal literature than in the obstetric. All are in agreement that the case has been made for administration of steroids to a woman at risk for delivery before 34 weeks; the questions are around administration beyond that gestational age. The last summary of 2017 dealt with UK guidelines and encouragement for obstetricians to administer steroids to women at risk for delivery between 34 and 38+6 weeks. A recent commentary in the Journal of Perinatology addresses the guidelines published by the American College of Obstetricians and Gynecologists, and fairly aggressively questions whether, in this late preterm group, the benefits to the neonate and child outweigh the risks. The benefits have been neatly described in terms of ‘NNTs’, i.e. the number of women needed to treat with antenatal steroids in order to prevent one case of the index outcome: ‘respiratory support’ (broadly defined) – NNT 35; transient tachypnoea of the newborn (TTN) – NNT 31; surfactant use – NNT 77. These NNTs are for the total group across the 34+ weeks range; they rise as gestation increases, obviously because the foetus is nearing term and maturity, and is less likely to have the problems. Objectors to the practice are concerned about the quite large numbers of such neonates treated unnecessarily, essentially to prevent TTN, which is regarded as a fairly benign condition, and about the potential risks of the steroids to those treated without benefit. Summary 1740 discusses the issue of the steroids not being metabolisable by the more-mature foetus and potentially affecting the brain, and the detractors are also concerned about neonatal hypoglycaemia that has been observed in a number of studies. From a South African perspective, the ‘elephant in the room’ is the potential for obstetricians to misuse the guidelines, which are essentially developed to assist doctors faced with an inevitable late preterm or early term delivery, or an operative delivery that is being performed for bona fide foetal or maternal indications. The guidelines are not intended to protect intrinsically low-risk neonates delivered by elective caesarean section within the time-frame under discussion for non-medical reasons. There is little justification for placing such a neonate at risk and then mitigating that risk by administering maternal steroids, and in a number of cases adopting a belt-and-braces approach by also adding neonatal surfactant to the mix.
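
As a reminder of the arithmetic behind these figures, an NNT is simply the reciprocal of the absolute risk reduction. The event rates below are invented purely to illustrate the calculation:

```python
def nnt(control_rate, treated_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_rate - treated_rate)

# e.g. if 'respiratory support' fell from ~8.6% to ~5.7% of deliveries,
# the absolute risk reduction is ~2.9 percentage points:
print(f"NNT: {nnt(0.086, 0.057):.0f}")  # ~34-35, the order of the quoted NNTs
```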

Read more:

J Perinatol 2017; 37: 1265

Arch Dis Child Fetal Neonatal Ed 2017; 102: F284-5

Am J Obstet Gynecol 2016; 215: 423-30 

2018:06 Do we need to do more to detect developmental dysplasia of the hip (DDH)?

Examination of the newborn is on the list for every medical student during her/his paediatric ward exposure. Following graduation and on assignment to a paediatric service, examination for any sign/s of abnormality becomes part of the routine for whoever is responsible for the neonate, be that an intern, registrar, medical officer or consultant. Examination for DDH is a universal and important element, as there is awareness of the consequences of missing an abnormality that has an excellent prognosis if detected and treated early. The current name reflects the recognition that this is a developmental problem rather than simply a congenital one, as was suggested by its former terminology (CDH, i.e. congenital dysplasia of the hip). The condition is stated to be the most common musculoskeletal disorder in infancy, with a reported prevalence of 0.5% to 4% depending on age, ethnicity and method of ascertainment. Even at the lower end of the prevalence spectrum the potential burden is impressive: in a busy public hospital delivering ~30 000 babies per annum one could be looking at several affected neonates per week (see the arithmetic below). Perhaps on the positive side, since one is not seeing a similar number of older infants and children presenting with the signs and symptoms of untreated DDH, one may assume that many cases that are potentially detectable at birth undergo spontaneous resolution, but some may also present later in life as osteoarthritis. To return to the question of detection around the time of birth, ideally one should at least have clinicians who are proficient at diagnosing the condition, so one must ask whether enough is being done to ensure that those responsible for the neonatal examination are indeed competent. There are also suggestions that additional screening should be performed. In Europe ultrasound screening is the norm in many countries, but there is still a lack of robust evidence to support the practice, so selective screening is practiced by others, e.g. with breech delivery or a positive family history. In a recently-published study from Norway, 4245 neonates were examined clinically and by means of ultrasound, and 90 (2.1%) were subsequently treated with a Frejka pillow (a device that keeps the infant in a position similar to that achieved with the Pavlik harness). Indications for immediate treatment in 63 were positive Ortolani or Barlow manoeuvres and/or sonographic dysplasia. Those with stable hips but sonographic ‘immaturity’ were re-scanned and treated where appropriate (n=27). All with clinically and sonographically normal hips were followed up until 4 years of age. Late presentation occurred in 2 cases. Fifty-five of the 90 were clinically negative but had sonographic evidence of joint immaturity or dysplasia, while 30 were positive on both clinical and sonographic examination. Only 5 were treated on the grounds of clinical examination alone. No patient required surgical correction, and there were no cases of avascular necrosis. Overall the authors conclude that adding universal ultrasound to clinical screening doubled the treatment rate without influencing the already low numbers of late/missed cases. However, one should note that in this study all clinical examinations were carried out by a single, highly experienced paediatrician.
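
The burden estimate above is simple arithmetic; a minimal check, assuming the quoted prevalence range applies to all live births:

```python
# Expected DDH cases per week in a hospital delivering ~30 000 babies/year,
# across the quoted prevalence range of 0.5% to 4%.
births_per_year = 30_000
for prevalence in (0.005, 0.04):
    per_week = births_per_year * prevalence / 52
    print(f"prevalence {prevalence:.1%}: ~{per_week:.0f} affected neonates/week")
# ~3/week at the lower bound and ~23/week at the upper bound.
```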

Read more:

Acta Paediatr 2018; 107: 255-61

J Pediatr Orthop B 2017; doi: 10.1097/BPB.0000000000000463

Arch Dis Child 2014; 99(Suppl 2): PO0692

2018:07 Low levels of arachidonic acid and development of retinopathy of prematurity (ROP)

Rapid brain and retinal development take place in the foetus during the third trimester, and long-chain polyunsaturated fatty acids (LC-PUFAs), which are structural and functional components of most cell membranes, are transferred from the mother. Both docosahexaenoic acid (DHA) and arachidonic acid (AA), the most abundant LC-PUFAs in the central nervous system, are selectively transferred. The relative proportion of AA in fetal blood is 2-fold higher than in maternal blood during this trimester, and the fetal DHA level increases above maternal levels after approximately 30 weeks of gestation. Although DHA is the most abundant fatty acid in cell membrane lipids in the brain and retina, AA is the dominant fatty acid in membranes of the vascular endothelium. With current standard care, extremely preterm infants receive insufficient amounts of both DHA and AA in parenteral nutrition, as reflected in low serum levels of these fatty acids. In studies performed in Europe, Turkey, and Australia, results of dietary supplementation with LC-PUFAs in preventing ROP have been inconsistent, with a reduction in any ROP but not in severe ROP in very preterm infants in one study, and no reduction of ROP with enteral DHA supplementation or with parenteral nutrition in a second. Fish oil supplementation has been shown to affect serum and plasma levels; however, the influence on longitudinal levels from birth to 40 weeks’ postmenstrual age has not been evaluated in extremely preterm infants, who are at greatest risk of developing severe ROP. The aim of a Swedish study involving 78 neonates born at 25.5±1.4 weeks and 797±223g was to examine the association between circulating levels of LC-PUFAs and development of ROP. Low postnatal levels of AA were strongly associated with, and highly predictive of, later ROP (both ‘any ROP’ and severe ROP); this has not been reported previously. Serum AA decreased significantly after birth and remained significantly lower throughout the postnatal course in infants with ROP compared with infants with no ROP. Overall, analysis showed that three of 8 analysed LC-PUFAs were associated with ROP, and the most predictive in logistic modelling was AA, even after adjusting for gestational age. Arachidonic acid is particularly enriched in the vasculature, and its metabolites can both stimulate and inhibit angiogenesis and inflammation, both of which are important in the development of ROP. In addition, AA derivatives stimulate both relaxation and contraction of blood vessels, and AA metabolites contribute to neurovascular coupling in the retina. The study accords with previous work proposing that an important component of preterm morbidities such as ROP is impaired vascular integrity resulting from a lack of DHA and AA in endothelial membranes. The authors conclude that measuring AA levels after birth in such preterm infants may be useful for ROP prediction but, more importantly, comment on the need for research into AA supplementation as a means of preventing ROP.

Read more:

JAMA Ophthalmol 2018; doi: 10.1001/jamaophthalmol.2017.6658

Clin Nutr ESPEN 2017; 20: 17-23

Nutrients 2016; 8: 216

2018:08 Evaluating networks of newborn cortical activity

Preterm birth of extremely- and very low birthweight (ELBW and VLBW) infants is regarded as contributing greatly to the burden of long-term neurocognitive, behavioural and even psychiatric problems later in life. In the assessment of risk for later deficits, clinicians rely on clinical examination, special investigations such as electroencephalography and radiography, neurological assessment and the employment of various neurodevelopmental tools. Developments in recent years have added to the level of understanding of brain structure and function. For example, summary 1614 comments on the use of functional neuroimaging to effectively develop ‘charts’ that plot the organization of intrinsic connectivity patterns and enable users to correlate deviations from the norm with abnormalities, e.g. in attention span. Using a different approach, as discussed in summary 1704, total brain volume and regional volumes have been studied in VLBW infants and differences found to correlate with psychiatric (autism spectrum), psycho-social and attention problems. To this list of modalities one may add another technique, which involves sophisticated analysis of the EEG together with brain-modelling and custom-developed software. As such, this ‘tool’ is unlikely to be available for widespread use, but it does add to the body of knowledge. In essence, Finnish researchers studied 46 extremely preterm (EP) infants (gestational age 26.2±1.6 weeks) and 67 healthy controls (HC) born at 40.4±1.8 weeks. All were studied electrophysiologically at actual or corrected term, and the EP group underwent neuro-assessment. The key EEG measures were amplitude and phase inter-relationships/correlations between various cortical areas, i.e. an assessment of connectivity networks (the sketch below illustrates the basic amplitude-correlation idea). Such correlations have been found to be sensitive to early brain lesions and may be predictive of compromised neurodevelopment. For purposes of analysis the neuro-assessment (based on Dubowitz et al’s Hammersmith Neonatal Neurological Examination) was categorised into elements associated with motor development, and others relating to cognitive and social development. The results showed that prematurity affects amplitude-amplitude correlated networks in the frontal region and also results in lower network density. Phase-phase correlated networks in the frontal and fronto-occipital connections, in turn, correlated with the cognitive-social neuro-assessment score. This research is consistent with other work showing that prematurity affects frontal sub-cortical microstructure and the development of cognitive functions assigned to frontal regions. Perhaps significantly, animal experiments have shown that reduction in networks leads to neuronal apoptosis. While this research may provide a greater level of understanding, the authors also indicate that various functional measures of neuronal network activity may in the future be useful to identify at-risk infants and to measure effects of existing and new therapeutic interventions. It would also be useful for the tests described in the study to be repeated in EP infants over time to assess whether networks change.
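
For orientation, amplitude-amplitude coupling between two channels is commonly estimated as the correlation of band-limited amplitude envelopes. The sketch below shows that building block only, on invented data with an assumed sampling rate and frequency band; it is not the authors' pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_correlation(x, y, fs, band=(3.0, 8.0)):
    """Pearson correlation between band-limited amplitude envelopes."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env_x = np.abs(hilbert(filtfilt(b, a, x)))   # envelope of channel x
    env_y = np.abs(hilbert(filtfilt(b, a, y)))   # envelope of channel y
    return np.corrcoef(env_x, env_y)[0, 1]

# Toy usage: two synthetic 'EEG' channels sharing slow on/off bursting.
fs = 250.0
t = np.arange(0, 60, 1 / fs)
burst = (np.sin(2 * np.pi * 0.05 * t) > 0).astype(float)
rng = np.random.default_rng(1)
ch1 = burst * np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)
ch2 = burst * np.sin(2 * np.pi * 5 * t + 1.0) + 0.3 * rng.standard_normal(t.size)
print(f"amplitude-amplitude correlation: {amplitude_correlation(ch1, ch2, fs):.2f}")
```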

Read more:

Cerebral Cortex 2018; doi: 10.1093/cercor/bhy012

Neuroimage 2017; 149: 379-92

Neurology 2017; 88: 614-22        

2018:09 Parvovirus B19 infection in children with acute myeloid leukaemia (AML)

Current texts on viral infections in childhood and adolescence comment on parvovirus B19 as having a global distribution and commonly infecting humans. Antibody prevalence increases throughout life, with ~70% of adolescents having IgG antibodies and almost 90% of the elderly testing positive. Vertical transmission following infection during pregnancy is well known, affecting 25-45% of foetuses. The virus shows a tropism for erythroid progenitor cells in bone marrow, with replication and cell lysis causing a disruption in red cell supply. This may manifest as a transient aplastic crisis. The virus can also infect megakaryocytes, causing thrombocytopaenia. Erythema infectiosum is the commonest manifestation of acute infection in children. Arthropathy, regarded as an immune-mediated process, occurs infrequently in children but is seen in up to 80% of adults. Immunocompromised patients may develop pure red cell aplasia and/or persistent infection with B19. Rare clinical manifestations include myocarditis, hepatitis, encephalitis and chronic fatigue syndrome. This probably conforms with what many/most clinicians know, but perhaps one might want to add to this list the relationship between B19 and childhood leukaemia. Certainly anaemia and other cytopaenias during the course of leukaemia can be attributed to factors such as marrow infiltration, chemotherapy or viral (e.g. cytomegalovirus or B19) infection, but of note is the question of whether B19 infection is opportunistic or plays a role in the pathogenesis of some cases of acute leukaemia. Studies have found up to 18% of acute lymphoblastic leukaemia patients testing positive for B19, while a recent study found much higher positivity rates in AML. Researchers from Mansoura University in Egypt studied 32 children recently diagnosed with AML (i.e. prior to induction therapy), 16 children during induction, and 60 age- and gender-matched controls. Cases were collected between December 2014 and June 2016. Blood was taken for routine haematological tests, B19 was tested for by IgG and IgM ELISA, and B19 DNA was detected by PCR. Recent infection was diagnosed by positive IgM and/or PCR results, while past infection was diagnosed by a single positive IgG result. In the 32 newly-diagnosed cases recent infection was determined by PCR±IgM in 16 and by IgM alone in 8, while in the 16 patients receiving chemotherapy PCR±IgM was positive in 8 and IgM alone in 7. Taking recent and past infections together, the majority of AML patients had evidence of infection: 87.5% at the time of diagnosis and 68.7% during induction, whereas controls showed only a 15% positivity rate. Recently-infected cases had significantly lower haemoglobin levels (4.2±0.1g/dl), platelets (78.28±7.2 × 10³/mm³) and neutrophils (5.4±1.5), while those infected in the past were closer to controls. Such a strong association between the virus and AML no doubt needs to be confirmed, but it does make the case for B19 potentially having a role in the pathogenesis of the disease, at least in some patients. Literature on the subject includes the delayed infection hypothesis, the two-step mutation model, and specific DNA-methylation patterns in susceptible B-precursor cells.

Read more:

Asian Pacific J Cancer Prev 2016; 19: 337-42

Curr Probl Pediatr Adolesc Health Care 2015; 45: 21-53

Epigenetics 2011; 12: 1436-43

2018:10 Aspirin for mothers at risk for pre-eclampsia (PE) significantly improves neonatal outcome

International literature cites 2-3% of pregnancies being affected by PE; however, the incidence in sub-Saharan Africa is reported as being 2-3x higher. The progression of PE is a major cause of mortality and morbidity for the mother, and of perinatal death and long-term handicap for the baby. In the USA it has been estimated that in 2012 the cost of PE within the first 12 months of delivery was $2.18 billion, borne disproportionately by infants of low gestational age and birthweight. Much effort has gone into research aimed at preventing PE and the development of eclampsia which, certainly in South Africa, has been reported as the major contributor to maternal morbidity and mortality within the large group of deaths related to hypertension during pregnancy. In the last decade research has produced methodologies that are capable of screening for risk of PE at 11-13 weeks by means of a combination of maternal demographic and medical characteristics, mean arterial pressure, uterine artery pulsatility index and serum placental growth factor. Approximately 10% screen positive, and the methodology detects around 75% of PE women who deliver at <37 weeks, and 90% of those with early PE who deliver at <32 weeks. Meta-analyses have shown that maternal aspirin administration at ≥100mg per day, started before 16 weeks, reduces PE risk by >60%. Armed with these data a large multi-country, double-blind, randomized study was carried out, confirming that aspirin at 150mg per day from 11-14 weeks until delivery or 36 completed weeks reduced preterm PE by 62% and early PE by 89%. The researchers were, however, surprised to find that the rate of neonatal intensive care unit (NICU) admissions was similar between the aspirin and placebo groups. Secondary analysis of 1571 neonates has recently been reported, showing that while admission rates to NICU were indeed similar (6.2% vs 6.8%), length of stay (LOS) was significantly different (11.1 vs 31.4 days; p=0.008). The bulk of the intergroup difference was due to the infants born before 32 weeks (9 of the 777 in the aspirin group and 23 of the 794 in the placebo group); after 32 weeks the LOS in NICU flattens for both groups. In other words the analysis revealed that aspirin prevented early PE, extended the duration of pregnancy and reduced the number of deliveries prior to 32 completed weeks. The estimated cost saving from the intervention was $5.6m, well in excess of the cost of screening. While full clinical details of the NICU admissions were not given, it would appear that intrauterine constriction of the ductus arteriosus (which has previously been reported following maternal aspirin ingestion) was not an issue, possibly related to the dose used in these subjects.
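
A quick check (our arithmetic) of the before-32-weeks delivery figures quoted above:

```python
# Rates of delivery before 32 weeks in each arm, from the counts quoted above.
aspirin, placebo = 9 / 777, 23 / 794
print(f"aspirin: {aspirin:.1%}  placebo: {placebo:.1%}  "
      f"relative reduction: {1 - aspirin / placebo:.0%}")
# ~1.2% vs ~2.9%: a ~60% relative reduction, in keeping with the trial's
# headline reductions in preterm and early PE.
```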

Read more:

Am J Obstet Gynecol 2018; doi: 10.1016/j.ajog.2018.02.014

N Engl J Med 2017; 377: 613-22

Z Geburtshilfe Neonatol 2005; 209: 65-8

2018:11 Does Hepatitis B vaccine (hBv) in infancy confer long-term immunity?

The United States CDC’s guidelines for hBv immunization state that studies indicate that immunologic memory remains intact for at least 30 years among healthy vaccinated individuals who initiate vaccination at >6 months of age. This confers long-term protection against clinical illness and chronic hepatitis, and cellular immunity appears to persist even though antibody levels might become low or decline below detectable levels. However, long-term follow-up studies are ongoing to determine the duration of vaccine-induced immunity among cohorts in which hBv was initiated at birth. An example of the former (i.e. vaccination at >6 months) is an Alaskan study in which ~1600 adults and children were followed up. The cohort was tested yearly for the first 11 years and then at 15, 22 and 30 years after the first dose. After 22 years 60% had a protective level of antibody to hepatitis B surface antigen (≥10 mIU/ml) and 93% had antibody or responded to a booster dose. No subjects had new or chronic HBV infection. A subset of the original cohort was studied at 30 years: 51% of 243 who responded to the primary series and had no additional booster had anti-HBs levels ≥10 mIU/ml after 30 years, and 88% of the 88 who were found to have anti-HBs <10 and were boosted during the follow-up responded within 30 days. Initial anti-HBs levels after the primary series correlated well with levels at 30 years. The authors concluded that at 30 years ≥90% had evidence of protection and that a booster dose was not necessary (because of the rapid response to boosting and clear evidence of immunological memory and antigen recognition). They also state that those vaccinated later (at 5-19 years) had higher levels of immunity than those immunized at <5 or >20 years, and comment that infants do not respond to boosting as readily as young children. To address the question of immunity in those immunized around birth, Israeli researchers studied >20 000 subjects tested at a mean age of 14.8±5.4 years. Mean anti-HBs antibody levels declined with time, and after 15 years 66.7% had ‘negative’ results. However, as with the Alaskan study, there was an excellent response to a booster injection (in 93.8% of 644 subjects boosted because of low antibody levels on entering a high-risk environment, e.g. a health-related field). Hepatitis B infection (as evidenced by the presence of HBs antigen) was found in 91 of the original 20 634 who were tested. Because of the latter finding the authors conclude that one might consider a booster dose in adolescents, but there is insufficient evidence to support routine boosting. For the sake of safety one might also consider a booster dose for individuals entering the healthcare field.

Read more:

Vaccine 2018; 36: 2288-92

Epidemiol Infect 2017; 145: 2890-5

J Infect Dis 2016; 214: 16-22   

2018:12 Infant mortality rates and causes of death in the United States among full-term births

Many summaries in this series include a comment about the value of large volumes of reliable data from national databases. Sadly, South Africa has a long way to go in this regard. As shown in summary 1635, which deals with the value of accurate congenital disorders data, while the country embarked on a surveillance initiative in 2001, utilization of the Birth Defect Notification Tool at national level between 2006 and 2014 was extremely poor, reporting on less than 2% of congenital disorders expected for the population. Health information collection is an explicit goal in the country’s plan for NHI (National Health Insurance), hopefully one that will be successfully achieved. In contrast to the local situation, US data were available for >10 million term births and deaths for the period between 2010 and 2012. The goal was to explore how the US overall, and within its states, compared to top-performing countries in terms of full-term infant mortality (FTIM), and was in response to data showing that such births in the US face substantially higher mortality risks than is the case in a number of European countries. In fact, according to estimates the US ranked 44th among 199 countries, with an infant mortality rate of 5.6/1000 live births in 2015 that was three times greater than that of the top performers in the ranking. Given the advanced state of perinatal and neonatal care in the US it is not surprising that mortality in the neonatal period compares well to rates in other high-income countries; it was the post-neonatal period that was responsible for most of the high FTIM rates. Deaths in live-born term infants between 37 and 42 weeks gestational age were categorised as being due to congenital malformations, sudden unexpected death in infancy (SUDI), perinatal conditions or all other causes. The state in which the infant was born, maternal age, race, education, smoking history and presence of diabetes and/or hypertension were recorded, as were gestational age, gender and birthweight of the infants. The overall FTIM rate was 2.19/1000, ranging from 1.29/1000 for Connecticut to 3.77/1000 for Missouri. FTIM for 10 states was classified as good, 17 as average, 11 as fair and 13 as poor. SUDI accounted for 43% of the deaths, congenital malformations 31% and perinatal conditions 11%. The risk of SUDI ranged from 5.6/10 000 live births in states with lower FTIM rates to 15.4 in states with high FTIM rates. Mortality risk from congenital malformations was also higher in high-FTIM states (8.4/10 000 vs 5.9). Again, possibly reflecting the quality of peri- and neonatal care between states, the mortality differences for perinatal conditions were small. Within the SUDI group, 42% were due to SIDS (sudden infant death syndrome) and 16% were due to accidental suffocation and strangulation. The authors found that factors such as state-level maternal age, education, race and health status impacted on SUDI and SIDS rates, and conclude that a substantial number of the 7000 full-term infant deaths each year could be prevented (~4000 if all states performed as well as the best performers), possibly through targeted information and behavioural change interventions with an emphasis on SIDS and suffocation prevention in relation to “sleeping arrangements.” An interesting point made in the article relates to differences in inter-state FTIM rates due to congenital malformations, and these differences possibly bearing a relationship to differential access to legal abortion.
A legitimate question is whether such a study is relevant in South Africa, where high post-neonatal mortality rates are still largely due to poverty, malnutrition and infections such as pneumonia and gastroenteritis. The answer is that until and unless we are able to capture the data and analyse them accurately, we won’t know the relative contributions of congenital conditions, SUDI and SIDS. The order-of-magnitude arithmetic behind the US estimates is shown below.
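
For the US figures, the order of magnitude can be reconstructed as follows; the ~3.4 million term births per year is our assumption (one-third of the >10 million quoted for 2010-2012):

```python
# Expected full-term deaths per year at the overall vs best-state FTIM rates.
term_births_per_year = 3_400_000   # assumed: ~one-third of >10m (2010-2012)
for label, rate_per_1000 in [("overall", 2.19), ("best state", 1.29)]:
    print(f"{label}: ~{rate_per_1000 / 1000 * term_births_per_year:,.0f} deaths/year")
# ~7,400 vs ~4,400: the gap (~3,000/year) is the same order of magnitude as
# the ~4,000 preventable deaths the authors estimate against top performers.
```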

Read more:

PLoS Med 2018; 15(3): e1002531

Natl Vital Stat Rep 2014; 63(5): 1-6

Pediatrics 2003; 111: e347-54

2018:13 Impaired left ventricular function following preterm birth

Extremely interesting research is currently being done on postnatal ventricular structure and function in infants born preterm, much of the work emanating from Oxford University in the UK and supported by other credible data. For example, researchers in Sweden reported on the incidence of early heart failure, with and without structural heart disease, in children, adolescents and young adults. They found that even without a history of congenital heart disease or other structural problems, a disproportionate number of individuals born preterm developed heart failure. The risk was 17-fold greater in those born at <28 weeks vs. a 4-fold risk if born between 28-31 weeks. These results feed into previous research in animals and humans that showed greater postnatal cardiomyocyte hypertrophy in preterm vs. term controls. In the human work it was shown that left ventricular (LV) mass increased by 56% in preterm neonates during the first postnatal month vs. 35% in those born at term, a hypertrophic response disproportionate to any simultaneous weight or length catch-up growth. Young adults born preterm show similar patterns, with LV mass inversely related to gestational age and unrelated to variations in blood pressure. It is not clear why these cardiac changes occur, but it has been postulated that factors such as altered immune, respiratory or vascular development may act as a trigger and result in such individuals being less ‘resilient’ in later life. Whatever the trigger, the Oxford group found that such infants fed breast milk had higher cardiac indices, e.g. stroke volume index, than formula-fed preterm infants. Adding to the body of knowledge is a recent study from Oxford that included 47 normotensive adults aged ±23 years who were born preterm, and 54 born at term. All were subjected to exercise testing at 40, 60 and 80% of peak exercise capacity. The preterm group had greater LV mass, but ejection fraction (EF) at rest was similar to that of term controls. At 60% exercise intensity EF was 6.7% lower in the preterm group, a difference that widened to 7.3% at 80% intensity. Submaximal cardiac output reserve was 56% lower in preterm subjects at 40% intensity. This reduced cardiac reserve during physical exercise in adults who were born preterm might help to explain their increased risk of early heart failure. In the aforementioned Swedish study there was a bimodal distribution of early heart failure, at <5 years and during very early adulthood.

Read more:

J Am Coll Cardiol 2018; 71: 1347-56

Pediatr Res 2017; 82(1)

Pediatrics 2016; 138: e20160050

2018:14 Adolescent cannabis use and risk of psychosis

Summary 1732 in this series touches on the subject of legalization of marijuana, with particular emphasis on so-called medical marijuana. As discussed, variants of marijuana contain differing concentrations of cannabidiol (CBD) and tetrahydrocannabinol (THC), the latter being primarily responsible for the drug’s psychoactive properties. ‘Medical marijuana’ should ideally be a defined product or range of products that is regulated in terms of CBD:THC concentrations, production, quality and distribution; however, the reality appears to be that ‘medical marijuana’ actually applies to all forms of the drug administered for ‘medical reasons’ that range from anxiety and insomnia to disseminated and invasive cancer. In many countries the drive for legalisation of medical marijuana is taking place in parallel with the drive to legalise access for recreational use. Proponents emphasise that the drug is safe, and over-use/abuse may not even be the fault of the user, since CUD (cannabis-use disorder) is now a recognized DSM-5 condition, possibly sharing genetic factors with predisposition to psychosis in young adults. In order to tease out the independent risk of psychosis in adolescent cannabis users, Finnish researchers tapped into a number of databases to follow a birth cohort that included ~9500 subjects who received questionnaires at 15-16 years of age and were followed for another 15 years. The focus was on the amount of cannabis use at 15-16, and the outcome was a subsequent diagnosis of non-organic psychosis. Data collection included age, gender, smoking and substance abuse other than cannabis, family structure, place of residence, socioeconomic status and parental psychosis. The final sample of 6534 included 375 who had used cannabis at least once. Psychoses emerged in 124 subjects: 39 in schizophrenia categories, 12 bipolar with psychotic episodes, 19 major depression with psychotic features and 54 other psychotic episodes. Parental psychosis and other drug use contributed to the risk of psychosis, but cannabis emerged as an independent, dose-dependent risk factor: psychosis developed in 18 of the 375 users (4.8%), almost three-fold the risk of psychosis without cannabis use. The hazard ratio for cannabis causing psychotic depression was 9.74, and for schizophrenia spectrum disorder 11.18. It is interesting to note that the authors cite as a limitation of the study the fact that cannabis is not a homogeneous product, that THC:CBD ratios have increased over the years, and that the results of this Finnish study might therefore not accurately reflect the current situation. While the drug does not appear to be universally dangerous, it certainly does not appear to be universally safe.
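
The 'almost three-fold' comparison can be reconstructed from the quoted counts (crude rates only; the published analysis adjusted for confounders):

```python
# Crude psychosis rates among users vs non-users, from the quoted counts.
users, total, psychoses, psych_users = 375, 6_534, 124, 18
rate_users = psych_users / users
rate_nonusers = (psychoses - psych_users) / (total - users)
print(f"users: {rate_users:.1%}  non-users: {rate_nonusers:.1%}  "
      f"ratio: {rate_users / rate_nonusers:.1f}x")  # ~4.8% vs ~1.7%, ~2.8x
```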

Read more:

Br J Psychiatry 2018; 212: 227-33

Schizophr Bull 2016; 42: 1262-9

World Psychiatry 2016; 15: 195-204   

2018:15 Risk of cannabis use in adolescent e-cigarette smokers

When it comes to teenage and adolescent substance abuse rates in South Africa it is probably a case of ‘knowing what we don’t know,’ as well as ‘not knowing what we don’t know’ about the problem/s in this country. As health advocates, Americans are no doubt ahead of the game, utilising research, large-scale surveys and comprehensive databases. Consequently there are data to show that, as a result of marketing and social media promotion, vaping of nicotine products and vaping of cannabis are both gaining popularity. In fact, e-cigarettes and marijuana are currently the two most commonly-used substances by teenagers. Furthermore, inroads have been made into utilisation among younger high school students who previously would have been regarded as being at low risk for substance abuse, and some research has shown that e-cigarette use may be initiated by children as young as 7 years of age. The American FDA has extended its authority to regulate e-cigarettes through measures such as mandatory age and photo identification checks in order to prevent sale to minors, and further measures to prevent youth use are under consideration (e.g. expanding smoke-free policies and tobacco regulation). In terms of the latter it is interesting to note that discussions are currently taking place in South Africa around further restricting smoking in public and also around regulating tobacco products, which would include e-cigarettes. Summary 1732 refers to existing and potential regulation of cannabis in South Africa, but this is taking place while Lesotho and now Zimbabwe have both legalized commercial production of ‘medicinal marijuana.’ How effective regulation will be is questionable when a quick Google search shows how easy it currently is to obtain both vaping products and marijuana on-line. So one is back to the issue of vaping increasingly being seen by children, teenagers and adolescents as ‘cool’, and cannabis increasingly being seen as harmless. The previous summary deals with its ‘harmlessness,’ and a recent article in Pediatrics deals with the doubled risk of progression to cannabis use in e-cigarette users vs. non-users. Risk factors for progression to marijuana included female gender, African American origin and lower grade performance at school. Most likely many readers of this and the previous summaries on the topic will discard and disregard them as the rantings of an arch conservative from the previous century. This might be partly true, but if nothing else one should ask whether we as child and adolescent advocates should be doing more to protect our country’s future generation from some of the hazards of easy access to potentially harmful products. Clearly, whereas e-cigarettes were introduced in order to provide (mostly older) smokers with a safer alternative to cigarettes or as a path to quitting the habit, for the young vaping appears as a glamorous lifestyle option.

Read more:

Pediatrics 2018; 141(5): e20173787

Prev Sci 2016; 17: 710-20

J Adolesc Health 2015; 56: 139-45

2018:16 Vaginal birth after caesarean section (VBAC) and neonatal outcome

Several summaries in this series relate to the high caesarean section (C/S) rate in South Africa’s private health sector and the indications for C/S, one of which is repeat C/S for a previous C/S; as such there is an inherent ‘inflator’ built into the equation. Many health systems challenge the dictum of ‘once a caesar always a caesar’ and are proponents of VBAC, acknowledging the risks but believing that they are outweighed by the benefits. In order to identify risk factors for poor outcome, Norwegian researchers interrogated national databases covering ±2.5m births between 1967 and 2008, with the focus on uterine rupture in the mother and outcome for the foetus. While uterine rupture did occur in ‘unscarred’ uteri, the majority of the 244 cases identified occurred in women who had undergone previous C/S (97% after only one previous C/S). While not specifically analysed in the article, it would appear from the data that the risk of rupture might be greater in pregnancies that go beyond 41 weeks vs. those between 37 and 40 weeks, with only ~5% rupturing when labour occurs at ≤28 weeks. Almost 50% of the offspring were healthy at birth and did not require admission to the NICU, slightly less than a quarter required admission for severe asphyxia or ‘other causes,’ and only two of the asphyxiated infants had cerebral sequelae at 5 years of age. Slightly more than a quarter were classified as intrapartum or neonatal deaths. The statistic of 64 deaths from 244 uterine ruptures is significant, but the figure that is lacking from the article is the C/S rate for the population under review. A Google search indicates that the Norwegian rate is low at 6.6%, and applying that to the population studied here suggests that ~160 000 individuals would have given birth via C/S. However, what is stated in the article is that 64% of the women with a history of C/S elected to proceed with VBAC, and 84% were delivered successfully. Putting this into the perspective of the 64 deaths, the rate would be something like 7/10 000 VBACs (see the rough arithmetic below), which many would argue is an acceptable complication rate vs. the potential complications from a similar number of C/S deliveries. In terms of risk factors for foetal/neonatal death, sudden loss of contractions, delivery after midnight, placental separation, foetal extrusion and time to delivery of >20 minutes were related to adverse outcome. It is relevant to ask how, or if, these VBAC data, which are based on a Norwegian C/S rate of <10%, could be applied to the local private sector in which the C/S rate is >70%. Clearly many more pregnancies would potentially be at risk, much more intrapartum monitoring of deliveries would be necessary, and if/when rupture is detected (which could occur during the night), would our ‘systems’ be in a position to offer outcomes similar to those achieved in Norway? This is not to say that favourable outcomes are not achievable in South Africa, but rather to suggest that the first step is to reduce the C/S rate in order to provide a more manageable patient base.
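
The article does not state the VBAC denominator, so the reconstruction below rests on the assumptions noted in the comments; it lands in the same ballpark as the ~7/10 000 suggested above:

```python
# Rough reconstruction of the deaths-per-VBAC figure (all inputs as quoted
# or assumed in the summary; the article does not state the denominator).
total_births = 2_500_000
cs_rate = 0.066                     # Norwegian C/S rate (Google estimate)
prior_cs = total_births * cs_rate   # ~165 000 births by C/S
vbac_attempts = prior_cs * 0.64     # 64% with prior C/S elected VBAC
deaths = 64
print(f"deaths per 10 000 VBAC attempts: ~{deaths / vbac_attempts * 10_000:.0f}")
# ~6/10 000, broadly consistent with the ~7/10 000 cited above.
```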

Read more:

Am J Obstet Gynecol 2018; doi: 10.1016/j.ajog.2018.04.010

J Matern Fetal Neonatal Med 2013; 26: 183-7

Clin Perinatol 2008; 35: 85-99

2018:17 Vitamin D for primary and secondary prevention of asthma?

This is posed as a question rather than as a statement because, while there are data to support a relationship between the vitamin and the condition, there are still questions as to if/when and how to intervene. This is apparent from an excellent review on the subject, in which the authors conclude in a table of clinical recommendations that: vitamin D supplementation is safe in patients with insufficiency or deficiency and has positive benefits in asthma and other diseases; regular doses should be administered rather than infrequent high-dose boluses; supplementation should be given as vitamin D3, but the exact 25(OH)D level at which supplementation should be started, the best dosing regimen and the optimal target for asthma prevention are not absolutely clear, nor is it clear whether 25(OH)D is the best measure of sufficiency; and maternal pre- and intra-pregnancy vitamin D status is important, with high-dose supplementation provided to achieve sufficiency. The authors make a case for vitamin D insufficiency/deficiency being both causal in the development of asthma and a consequence of asthma, the latter as a result of the disease being ‘D-consumptive’ or because severe asthma may reduce the amount of time patients spend in the sun. Backing their recommendations is a body of research indicating that the vitamin has profound paracrine actions throughout the body, and in particular major effects on the immune system. Reference is also made to a current epidemic of D insufficiency as a result of ‘modern lifestyles’ that limit sun-exposure and prevent natural production of the vitamin in the skin. Cross-sectional epidemiologic studies have consistently shown more severe asthma in patients with lower vitamin D levels. An immunologic role is described on the basis of a failure of suppression of inappropriate antigen-dependent immune responses, for instance as a result of an effect on regulatory T-lymphocytes, or through an effect on ‘antigen presentation’ by dendritic cells. Vitamin D also suppresses production of IgE by B-lymphocytes, and vitamin D deficiency in children is associated with increased aeroallergen-specific IgE. The vitamin also has a suppressant effect on mast cell activation, thereby reducing histamine and TNF-α release. In non-allergic asthma, vitamin D stimulates bronchial epithelial cell production of a ‘decoy-blocker’ for IL-33, thereby decreasing the pro-inflammatory effect of IL-33 on mast cells. A role for vitamin D during foetal life has also been proposed, with effects on lung maturation, materno-foetal tolerance and even on the child’s early-life microbiome. No claims are made for administration of the vitamin in pharmacological doses for prevention or management of asthma in the presence of vitamin D ‘sufficiency,’ but one should clearly be cognisant of a likely role in patients or populations with, or at risk for, insufficiency or deficiency.

Read more:

Chest 2018; 153: 1229-39

JAMA 2014; 311: 2083-91

Am J Clin Nutr 2007; 85: 860-8 

2018:18 What about folic acid supplementation and cleft lip and palate?

Summary 1725 in this series includes a reference to research into possible deleterious effects of folic acid on children when high doses are administered to the mother during the periconceptional period: offspring of the one-third of women who took dosages ≥1000µg/day displayed adverse neuropsychological outcomes, in particular lower global verbal and verbal memory scores. This caution against higher dosing during the periconceptional period contrasts with a statement in a recent report from Iran on the effect of folic acid supplementation on the development of non-syndromic cleft lip and palate. In the latter review and meta-analysis the authors comment on prevention of recurrent cleft, and state that a high dose (referring to studies using 4000µg) may be safe, although their actual recommendation is for 400-800µg. But is there a place at all for folic acid in the prevention of cleft lip and palate, a common congenital malformation of the oral and maxillofacial area, reportedly affecting between 1/200 and 1/2500 births depending on ethnic and socio-economic status? Studies of the role of folic acid are inconsistent, which led the researchers to carry out a meta-analysis of published case-control and cohort studies. Following an initial extraction of 1630 potentially relevant articles, the final sample included 31 case-control and 6 cohort studies, covering 27 045 clefts in a sample population of ~1.2m. The analyses distinguished between folic acid administered on its own vs. as a component of a multivitamin. Results showed that the multivitamin was superior, possibly because of a synergistic effect of other B-group vitamins such as B2 and B6. Timing appeared to be important, and the authors concluded that, particularly when initiated prior to conception, multivitamins have the potential to prevent 40% of cases of cleft lip and palate and 35% of cases of isolated cleft palate. Respective odds ratios were 0.65 (CI 0.55-0.80) for the former and 0.69 (CI 0.53-0.90) for the latter. Studies included in the analysis were carried out between 1958 and 2016, with 19 from Europe, 9 from the USA, 6 from China and 3 from Thailand and Australia. While paediatricians almost certainly counsel parents about the importance of folic acid following the birth of an infant with a neural tube defect, one wonders whether the same applies when presented with an infant with a cleft lip and/or palate. Interestingly, a simple Google search of ‘prevention of cleft palate’ brings up multiple medical and lay recommendations for administration of folate. If paediatricians are indeed remiss in not making the same recommendation, perhaps it is because affected infants (particularly those with non-syndromic clefts) are quickly referred to the management team of orthodontists, maxillofacial surgeons, plastic surgeons, prosthodontists, otorhinolaryngologists, audiologists, speech therapists and the like. So if the paediatricians are remiss, hopefully one or more members of that extensive team will ensure that multivitamin or folate supplementation occurs prior to any future pregnancy.

Read more:

J Craniofac Surg 2018; doi: 10.1097/SCS.0000000000004488

Int J Epidemiol 2008; 37: 1041-58

Cleft Palate Craniofac J 2004; 41: 195-8

 

 

 

2018:19 MicroRNA (miRNA) profiles to detect/predict cerebral palsy in preterm infants

 

MicroRNAs are tiny non-coding RNA molecules that are endogenous physiological regulators of gene expression, either by repression of translation/transcription or by activation of transcription. They play important roles in processes such as tumorigenesis, haematopoiesis, immune function, diabetes mellitus and progression of neurological diseases. In this regard, animal models of stroke have shown that miRNAs are regulated during progression and reperfusion of cerebral ischaemia, and that their presence in blood may be used as a diagnostic marker. In a 2009 report these findings were applied to 'young' stroke patients aged between 18 and 49 years, showing not only that 157 different miRNAs could be identified in these patients, but also that different miRNAs were involved in different types of stroke (large vessel atherosclerosis, small vessel disease and cardio-embolism). Specific functions attributable to the various miRNAs related to angiogenesis, endothelial/vascular function, erythropoiesis and neural function, and miRNAs involved in 'hypoxia conditions’ were also identified in the stroke patients. One hundred and thirty-eight of the 157 miRNAs were highly expressed (i.e. up-regulated) and 19 were poorly expressed (down-regulated). While details of the timing of sampling were not clear in the report, the authors state that the miRNAs were 'stably' expressed and in circulation even several months after the onset of stroke.  It was also found that poor- and good-outcome patients showed different patterns of up- vs. down-regulation, with good outcome being associated with down-regulation. Furthermore, several miRNAs showed changes during disease progression. Along similar lines to the abovementioned research, investigators from Chicago's Northwestern University studied a small number of preterm infants <32 weeks and <1500g who were considered to be at risk for cerebral palsy (CP). Such infants are particularly susceptible to oligodendroglial injury, the pathological 'substrate' for CP, and several miRNAs are known to regulate oligodendroglial differentiation. Thirty-one blood samples were obtained, most at 1 month of age, but some were collected as early as 3 days and others as late as 2 months. Ultrasound examinations were carried out at 7-10 days, 1 month and 36 weeks corrected age for evidence of intraventricular haemorrhage (IVH).  Neonatal charts were reviewed, and patients were neurologically assessed at clinic visits up to 18 months of age. Only two cases of CP were identified, but the authors also assessed patients in terms of abnormal tone recorded at any of the visits. Such patients scored lower on Bayley scales up to 12 months of age, but these differences disappeared by 18 months.  Nevertheless, examination of 752 different miRNAs in the infants produced interesting results. Peripheral expression of 23 miRNAs differed in IVH vs. non-IVH subjects, while differences in expression of 70 miRNAs were found in those with normal vs. abnormal tone. The authors thus propose that miRNAs are implicated in the development of CP, i.e. post-transcriptional regulation of gene expression by miRNAs may be important in the pathogenesis of motor dysfunction following preterm brain injury. One specific miRNA (miR-654), which was upregulated in subjects with abnormal tone, has been shown to decrease cell proliferation and migration, and to induce apoptosis. 
Future interventions could possibly be along the lines of miR-654 antagonists administered to block the potentially damaging effects of cerebral ischaemia. At this stage the data represent interest and observation rather than new modalities to diagnose, prevent or treat CP, but they should nevertheless alert one to the role/s of miRNAs in pathological situations. Also of interest is the observation in stroke patients that miRNA changes are stable over time. If this is confirmed it raises the question of whether there is a ‘therapeutic window’ that might remain open for a while, possibly allowing for interventions that might reduce the effects of the ischaemic insult.
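
For readers unfamiliar with how such screens are analysed, the general shape is: one statistical test per miRNA across the two groups, followed by multiple-testing correction, since testing 752 candidates at p < 0.05 would otherwise yield ~38 false positives by chance alone. A generic sketch on simulated data (this is not the authors' pipeline; the group sizes merely echo the 31 samples in the study):

    # Illustrative differential-expression screen with false-discovery-rate
    # control. Expression values and group labels are simulated.
    import numpy as np
    from scipy.stats import ttest_ind
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    n_mirnas, n_ivh, n_no_ivh = 752, 12, 19        # sizes loosely echo the study
    expr_ivh = rng.normal(0.0, 1.0, (n_mirnas, n_ivh))
    expr_no = rng.normal(0.0, 1.0, (n_mirnas, n_no_ivh))
    expr_ivh[:23] += 1.5                           # pretend 23 are up-regulated

    # One t-test per miRNA, then Benjamini-Hochberg correction across all 752.
    pvals = np.array([ttest_ind(expr_ivh[i], expr_no[i]).pvalue
                      for i in range(n_mirnas)])
    reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(f"{reject.sum()} miRNAs flagged as differentially expressed")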

Read more:

Phys Med Rehabil Int 2018; 5: 1148-53

PLoS One 2009; 4: e7689

Cell 2004; 116: 281-97

 

 

 

2018:20 Update on fluid replacement in young patients with diabetic ketoacidosis (DKA)

 

In the mid-1980s attention was drawn to the risk of brain swelling in 0.5-1% of Type 1 diabetic children presenting with DKA, and concerns were raised about the rate of fluid replacement in such patients. To paediatricians in South Africa the latter finding correlated to an extent with the situation in infants and children suffering from severe gastroenteritis-related hypertonic dehydration. Blood results in the two conditions were not too dissimilar in terms of elevated urea, sodium and even glucose levels, and explanations such as the generation of idiogenic osmoles within brain cells appeared to apply to both situations. Thus there was no surprise when treatment guidelines for DKA advocated slow rehydration (after bolus resuscitation), usually with isotonic saline.  These guidelines may well be about to undergo revision following a US-based multicentre study involving 13 sites and 1255 young patients aged up to 18 years who presented with DKA. Eligible participants had a blood glucose of >16.7mmol/l, a venous pH of <7.25 or serum bicarbonate of <15mmol/l, and a Glasgow Coma Scale (GCS) score of >11. Patients were randomised into four groups in a 2×2 design: fast or slow fluid replacement, each with either normal or half-normal saline. The fast groups were taken as being 10% dehydrated and the slow groups as 5% dehydrated. All groups received a bolus of 10ml/kg normal saline, which could be repeated as deemed necessary on clinical grounds, but a second bolus was routinely administered to patients in the fast groups. The fast groups received half their assumed deficit over 12 hours and the balance over the next 24 hours (all in addition to maintenance requirements). The slow groups did not receive the second standard bolus and were then maintained and rehydrated over 48 hours. Continuous insulin was administered at 0.1units/kg per hour after the fluid bolus/es, and dextrose was added when the blood glucose was between 11 and 16.7mmol/l. Potassium was administered as chloride and phosphate or acetate and phosphate. The primary outcome was a decline in mental status, defined as two GCS scores of <14 during treatment for DKA. Secondary outcomes were clinically apparent brain injury during treatment, short-term memory impairment during treatment, and IQ measured after 2-6 months. The GCS fell to <14 in 48 of the 1389 episodes studied and clinical brain injury was observed in 12 (0.9%). There was one death. Risk of brain injury was independent of study group; it appeared to be greater with lower entry pH and pCO2, and there was a trend towards better outcome in those treated with half-normal saline at the faster rate. The results are important, but perhaps one should note that the risk of cerebral oedema has been stated to be greater in younger patients on first presentation of their diabetes, whereas the average age in this study was 11.5 years and 50% were known to be diabetic on presentation. It should also be noted that several other factors have been implicated in the genesis of cerebral swelling in DKA, such as a) the condition not being due to intracellular swelling but to extracellular fluid accumulation following ischaemia-related reperfusion injury, endothelial damage and neuro-inflammation; and b) a specific role for insulin, since animal studies have shown that fluid replacement plus insulin for DKA induces swelling whereas fluid alone does not. The latter implies insulin-driven anti-porter activity.
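
To make the 'fast' arm arithmetic concrete, here is a minimal sketch for a hypothetical 30 kg child. Two assumptions are mine rather than the trial's: maintenance fluid is calculated by Holliday-Segar, and bolus volumes are subtracted from the calculated deficit; the trial's own protocol tables govern actual practice.

    # Rough worked example of the 'fast' arm for a 30 kg child (illustrative
    # only; Holliday-Segar maintenance and bolus subtraction are assumptions).
    def maintenance_ml_per_day(weight_kg):
        """Holliday-Segar: 100 ml/kg for the first 10 kg, 50 ml/kg for the
        next 10 kg, 20 ml/kg thereafter."""
        first = min(weight_kg, 10) * 100
        second = min(max(weight_kg - 10, 0), 10) * 50
        rest = max(weight_kg - 20, 0) * 20
        return first + second + rest

    weight = 30.0                      # kg
    deficit = 0.10 * weight * 1000     # fast arm assumes 10% dehydration -> ml
    boluses = 2 * 10 * weight          # two 10 ml/kg normal-saline boluses
    remaining = deficit - boluses      # deficit still owed after boluses

    maint_hr = maintenance_ml_per_day(weight) / 24
    rate_0_12h = remaining / 2 / 12 + maint_hr    # half the deficit over 12 h
    rate_12_36h = remaining / 2 / 24 + maint_hr   # the rest over the next 24 h
    print(f"deficit {deficit:.0f} ml, boluses {boluses:.0f} ml")
    print(f"0-12 h: {rate_0_12h:.0f} ml/h; 12-36 h: {rate_12_36h:.0f} ml/h")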

Read more:

New Engl J Med 2018; 378: 2275-87

J Clin Endocrin Metab 2000; 85: 509-13

http://PedsCCM.wustl.edu/FILE-CABINET/Metab/DKA-CEdema.html 1998

 

 

 

2018:21 Non-surgical correction of abnormally-shaped ears

 

In several summaries in this series the question has been raised as to whether the topic under discussion is relevant to developing vs. developed communities and countries.  The topic here may certainly be subject to the same question. However it seems that abnormally-shaped ears occur fairly frequently and, if identified early, may respond well to non-surgical intervention. Studies going back to the 1970s have shown that children with prominent or abnormal ears suffer various forms of abuse and humiliation, and that corrective surgery has social and psychological benefits.  However it has also been shown that early non-surgical intervention in the form of ‘splinting’/‘molding’ of the ear into a normal shape may correct the abnormality and avoid the need for surgery, which would typically take place only after several years, once the ear has established its near-final shape. The ability of the neonatal ear to respond to splinting appears to be related to the lack of elasticity (i.e. non-recoil) of the auricular cartilage.  This is the result of chondrocytes and intercellular materials assembled from collagen, elastin and proteoglycans being more loosely bound during the first three days of life owing to increased levels of hyaluronic acid.  The latter is in turn the consequence of maternal oestrogens that are present in the neonate for the first few days of life.  Early activity of the extrinsic muscles of the ear also plays a role in shaping the ear after birth. In a 2008/9 systematic review of 20 papers on non-surgical correction it was shown that fair-to-excellent results were obtained in 70-100% of cases, with treatment outcome being poorest when applied later to older children. Researchers who have compared results of early treatment vs. outcome where parents refused molding have shown that 33% of the untreated group corrected spontaneously vs. ‘excellent correction’ in 90% of the treated group.  The abnormalities that have been shown to respond are the Stahl’s ear (which has a cartilaginous bar at right angles to the helical rim), the ‘lop ear’ (which has a downward-folded helix), and the prominent/protruding ear (characterised by an absent anti-helical fold and/or a deep conchal bowl).  While clinicians involved in the delivery of babies specifically examine the ears for abnormalities, these are typically related to features such as position of the ears, microtia and pre-auricular pits, i.e. the previously mentioned abnormalities would typically not be sought, commented on or discussed with parents. In contrast to the latter approach, and following a study at the Mayo Clinic, researchers in Canada demonstrated that after only 2 hours of training under a specialist otolaryngologist a medical student was able to photograph and assess ears with a high degree of accuracy. In that study 9.3% of 333 newborns assessed at <72 hours had abnormal ears: sixteen were categorised as Stahl ears, 10 were prominent and 2 were cupped.  Most cases were among newborns of Asian or Caucasian descent.  The ethnic differences raise the issue of whether the abnormalities are perceived in the same way by different cultures, but that is something that the parents would/should decide. 
The ‘take home’ messages here are therefore that implementing a non-surgical approach to management of the condition does not require much of a team; that highly qualified professionals are not needed to diagnose an abnormal ear; that abnormalities amenable to non-surgical intervention should be diagnosed early; and that parents should be made aware of treatment and outcome options.

Read more:

Int J Pediatr Otorhinolaryngol 2018; 110: 22-26

J Plast Reconstr Aesthet Surg 2012; 65: 54-60

Br J Plast Surg 1992; 45: 97-100     

 

 

 

2018:22 Multilevel surgery (MLS) in children, adolescents and adults with cerebral palsy (CP)

 

Children with cerebral palsy represent a common challenge for specialist and sub-specialist paediatricians, family doctors, families, schools and society. Optimization of function and integration into society are goals for all patients whose CP is amenable to intervention.  Non-operative management is the treatment of choice until 6 years of age, while problems are ‘dynamic’ and in evolution. Beyond that time musculoskeletal pathology usually becomes ‘fixed’ in terms of contractures of muscle-tendon units, bony torsional deformities and painful subluxation of joints such as the hip.  Despite the stability of the CNS lesion, there is robust evidence for deterioration in gross motor function and walking ability during childhood.  Consequently orthopaedic surgery is recommended, ideally performed as MLS, which has been defined as ≥4 separate orthopaedic procedures, at each anatomical level, in both lower limbs, during one operative session, combined with one extended period of rehabilitation.  Two relevant articles have been published on this topic in the past few months, one dealing with MLS in children, the other in adults. Both have added to the body of knowledge, particularly as they focus on longer-term outcome, measured by pre- and post-interventional 3-dimensional gait analysis and conversion into a Gait Profile Score (GPS), which condenses 9 kinematic variables into a single number. A higher GPS indicates a greater deviation from typical gait. The study involving 5-16 year-olds was carried out in three collaborating sites and covered 231 patients with bilateral spastic CP. Mean age was 10 years 7 months and the mean follow-up period was 9 years 1 month. The mean number of MLS procedures was 8 per child.  In 94% of patients the Gross Motor Function Classification System (GMFCS) score did not change; in 13 patients the score improved by one level and in two it deteriorated. Importantly however, at the time of final assessment 76.6% had maintained their improvement in GPS. The study of adult patients is less robust in that the sample size is very small, involving only 20 patients from an original sample of 82. In adults MLS is carried out to improve gait, maintain the ability to walk and decrease energy consumption. Pain, fatigue and osteoarthritis are particular factors in aging patients with CP. In this analysis surgery was performed at ≥17 years of age. Mean age at surgery was 24.8 years and average follow-up was 10.9 years, with a significant improvement in GPS in all but 6 patients. Perhaps the most important message from the adult study is the need to minimize the loss of patients to follow-up. This involves ‘transitioning’ patients more effectively from care delivered by paediatric and adolescent caregivers in facilities focusing on the young, to supportive management delivered in appropriate facilities by practitioners able to deal with issues affecting adults. 
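
For those unfamiliar with the GPS, the cited Gait Posture paper defines each Gait Variable Score (GVS) as the root-mean-square deviation of one kinematic curve from the normative mean across the gait cycle, and the GPS as the RMS average of the nine GVSs. A minimal sketch with simulated curves (the 51-point sampling of the gait cycle is a conventional choice, not a detail taken from the studies above):

    # Minimal GPS calculation in the spirit of Baker et al. (Gait Posture
    # 2009): one GVS per kinematic variable, then the RMS of the nine GVSs.
    import numpy as np

    rng = np.random.default_rng(1)
    n_points, n_vars = 51, 9                          # 2% steps of the gait cycle
    patient = rng.normal(0, 8, (n_vars, n_points))    # simulated patient curves (deg)
    reference = np.zeros((n_vars, n_points))          # simulated normative means

    gvs = np.sqrt(np.mean((patient - reference) ** 2, axis=1))  # one GVS per variable
    gps = np.sqrt(np.mean(gvs ** 2))                            # overall score
    print("GVS (deg):", np.round(gvs, 1), "GPS:", round(float(gps), 1))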

Read more:

Int Orthop 2018; doi: 10.1007/s00264-018-4023-7

Dev Med Child Neurol 2018; 60:88-93

Gait Posture 2009; 30: 265-9

 

 

 

2018:23 Drugs as triggers of chronic spontaneous urticaria (CSU)

 

Urticaria is a common cutaneous condition. The chronic form affects some 1% of the general population and has a significant impact on quality of life.  CSU is defined as the recurrence of hives, with or without angioedema, on more than three days per week, persisting for at least 6 weeks.  It can occur at any age. Often, in discussion with a health professional, patients associate onset of CSU with exposure to foods, medications or particular activities.  From the health professional’s side it is not uncommon for a patient to respond adversely with urticaria following legitimate treatment for one or other common condition.  The association between drug and CSU, whether on the basis of patient self-report or doctor’s diagnosis, commonly results in a label of ‘drug allergy’ and advice to avoid the potential precipitating factor/s. In a recent report, doctors from South America carried out challenge tests to determine the concordance between patient reports of reactions and clinical responses to challenge with the presumed offending agent/s. The researchers had previously observed that the self-reported prevalence of inducible urticaria was 75% but the prevalence from positive challenge tests was only 36%, indicating that a large number of patients make unnecessary lifestyle restrictions. According to self-reports, non-steroidal anti-inflammatory drugs (NSAIDs) are most often associated with CSU exacerbations, being implicated in up to 50% of patients.  In a prospective 6-centre controlled study the researchers enrolled 245 patients with CSU aged >12 years, and 127 healthy controls.  All participants were questioned about past drug reactions.  Thirty-seven percent of CSU patients and 23.6% of controls reported at least one adverse drug reaction.  NSAIDs and beta-lactam antibiotics were the most common offending agents in both groups, although the association was higher in the CSU group. Atopy and asthma were more frequent in the CSU group. All subjects with a self-report of an adverse drug reaction restricted the ‘responsible’ drugs. However, of the total number of subjects in each group (245 and 127), only 32 in the CSU group had a positive response to challenge with an NSAID or beta-lactam vs. 1 in the control group. The CSU group had better agreement between self-report and positive challenge than the control subjects. The authors recommend that careful drug challenge using accepted protocols should be carried out in patients with a self-report of drug-related CSU in order to avoid unnecessary lifestyle and/or medical restrictions. While not specifically commented on, the article does not necessarily question an association between historical exposure and a subsequent negative challenge test, i.e. one may assume that there are cases in which exposure at a specific time did indeed trigger the CSU but a challenge test months later may be negative.

Read more:

J Investig Allergol Clin Immunol 2018; doi: 10.18176/jiaci.0287

Allergol Immunopathol 2017; 45:573-8

J Allergy Clin Immunol Pract 2017; 5: 464-70    

 

 

 

2018:24 HIV in South Africa 2017/18

 

News reports from the 2018 International AIDS Conference in Amsterdam paint a positive picture of progress in combating HIV acquisition in South Africa and its neighbours. Incidence rates are quoted as having decreased by 50% in Namibia over 3 years, 44% over 6 years in Swaziland, 30% over 2.5 years in Botswana and 44% over 5 years in South Africa. Without doubt these are measures of success, but it is nevertheless wise to keep a sense of perspective and explore how, why and where more work needs to be done. In this regard a number of databases and recent reports provide important information. For example, a UNAIDS report on global HIV statistics for 2017 highlights a number of issues: globally 36.9 million people were living with HIV (PLWH); 1.8 million became newly infected in 2017; and 940 000 died from AIDS-related illnesses. If one combines these statistics with figures released in South Africa by Statistics South Africa (SSA) and the Human Sciences Research Council (HSRC), one sees that the country’s current ‘contribution’ of 7.5-8.0 million PLWH to the global burden of disease is in the region of a massive 20%. Perhaps on the plus side, at 231 000 the country’s estimated contribution of new infections to the global total is ~13%, and the number of South African AIDS-related deaths (115 167 in 2017/8) is similarly ~12% of the world’s total, i.e. while the country still contributes significantly to the global burden of disease, the HIV acquisition and mortality rates are relatively lower, so clearly there have been successes in terms of preventing new infection and preventing death from AIDS-related illnesses. But particular concern is being expressed internationally and locally about risk for those aged between 15 and 24 years. UNAIDS reports that every week 7000 young women between 15 and 24 years of age become infected with HIV, while the HSRC estimates that in 2017 almost 1300 South African females within the same age bracket were newly infected per week. Within this age bracket, which contributes almost 40% of new infections, females were at double the risk compared with males; the proportion on antiretroviral treatment (39.9%) was lower than in both younger and older age groups; viral suppression rates were lower; and initiation of sexual activity had increased compared with previous years. While males were more likely to be sexually active, females were far more likely to have age-disparate sexual relationships than males (35.8% vs 1.5%). Paediatricians can to some extent bask in the glory of success achieved by the large number of colleagues who have advocated for and been actively involved in programmes that have almost eliminated the risk of mother-to-child transmission, but it is now time to become active again and identify areas in which members of departments of paediatrics and child health around the country can engage with this target group, which includes adolescents who continue to put themselves and others at risk.
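
The quoted shares follow directly from the figures above; a minimal check (the midpoint of the 7.5-8.0 million PLWH range is my choice):

    # Arithmetic behind the quoted shares, using the figures given above.
    global_plwh, sa_plwh = 36.9e6, 7.75e6          # midpoint of 7.5-8.0 million
    global_new, sa_new = 1.8e6, 231_000
    global_deaths, sa_deaths = 940_000, 115_167

    for label, num, den in [("PLWH", sa_plwh, global_plwh),
                            ("new infections", sa_new, global_new),
                            ("AIDS-related deaths", sa_deaths, global_deaths)]:
        print(f"South Africa's share of {label}: {num / den:.0%}")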

Read more:

2017 Global HIV Statistics. http://www.unaids.org/en/resources/fact-sheet

Stats SA Statistical release P0302.  www.statssa.gov.za

http://www.hsrc.ac.za/en/media-briefs/hiv-aids-stis-and-tb/sabssm-launch-2018  

 

 

2018:25 Do telomeres shorten in children whose fathers are absent?

 

Telomeres are non-coding repetitive DNA sequences at the end of each chromosome, their primary function being the maintenance of genomic stability. However, DNA polymerase is unable to fully replicate the chromosomal ends, with the result that telomeres shorten with each cell division. Once telomeres are reduced to critical lengths the cell enters a state of replicative arrest, i.e. senescence. For most people telomere length (TL) decreases with age and may be regarded as a mitotic clock that represents biological age. Various underlying factors, including oxidative stress, DNA damage and genetics, influence TL and may therefore be indicative of accelerated biological aging.  In adults TL has been shown to be negatively associated with smoking, mental illness, stress, obesity and poverty. In children, studies have shown a relationship between TL and poverty, maternal depression and maltreatment. In infants, shorter telomere length has been associated with a number of perinatal factors such as intrauterine growth restriction, preterm rupture of membranes, gestational diabetes and prenatal exposure to anti-retroviral therapy. An article published in Pediatrics examined children born between 1998 and 2000 who were enrolled in the Fragile Families and Child Wellbeing Study (FFCWS). The cohort included 2420 children from 20 US cities. Interviews were conducted between 1 and 9 years of age and children’s salivary DNA samples were taken at 9 years of age. The outcome of interest was the relationship between TL and loss of the father through death, incarceration, separation or divorce. Genetic influences, related to gene polymorphisms with consequential impact on serotonergic and dopaminergic signalling, were also studied. At 9 years of age children who had lost their father had significantly shortened telomeres. Paternal death had the most significant effect, followed by incarceration, then separation or divorce.  Consequential financial stress played a role, but more so in the case of separation or divorce than for incarceration or death.  Boys had more shortening of TL than girls, as did those children with genetic markers for greater serotonin transporter reactivity. This provides another example of the relationship between forms of stress, TL shortening and possibly more rapid biological aging.  However, questions remain as to the extent to which TL is a time-sensitive predictor of a child’s long-term health and wellbeing and, possibly more importantly, the extent to which one can slow or even reverse the process in affected individuals.

Read more:

Pediatrics https://doi.org/10.1542/peds.2016-3245

Int J Epidemiol 2016; 45: 424-32

Am J Psychiatry 2010; 167: 509-27

 

 

 

2018:26 Is enough being done to decrease myopia progression in children?

 

Readers of this series of updates in paediatrics will recognise a strong trend towards the expanded roles and responsibilities of paediatricians as advocates for children and adolescents. The previous two summaries serve as examples, one focusing on HIV risk among South African adolescents, the other exploring an aspect of parental loss. Another issue that deserves attention is myopia in children, both its detection and its management.  To some extent the problem has been highlighted within the national debate around the introduction of National Health Insurance (NHI) and an initial focus on vulnerable groups. Results from screening of school children within NHI pilot sites during a test phase revealed that >100 000 children were at risk from dental, visual and hearing problems and required further intervention. The exact nature of the visual problems was not stated, but from other studies it is probably safe to say that myopia features significantly on the list.  Other countries have recognised the importance of the condition, although it is acknowledged that incidence and prevalence, certainly in children, differ between populations. For example, studies have shown that 80-90% of Asian school children are myopic. In North America and Europe the prevalence in younger adults has increased to 40-60%, and it has been reported that by 2050 some 5 billion of the world’s population will be myopic, with another billion suffering from high myopia. This represents a 2.6-fold increase in the number of affected people between 2010 and 2050.  The socioeconomic burden on individuals and countries is significant.  There are reports from South Africa covering children, adolescents and adults showing, for example, a prevalence of 4% in children aged between 5 and 15 years, reaching almost 10% at age 15 and increasing to around 35% in adults above 35 years of age.  But to what extent can one intervene, particularly in children, in order to slow the progression of the disease? This is an important question because treatment of myopia at an early stage, before it increases to high myopia, can significantly reduce the risk of retinal detachment, glaucoma, cataract and macular degeneration.  All these conditions are more prevalent in high myopia and can lead to blindness. Treatment to decrease the rate of progression spans three modalities (pharmacological, optical and behavioural); however, there is little agreement as to which represents best practice or when to begin treatment. To gain some insight into how paediatric ophthalmologists manage the condition an international group of researchers surveyed >2000 specialists from various countries. Responses were received from 940, mostly from North America, Far East Asia and Europe, with the majority (57%) routinely treating to reduce myopia progression. While only two African countries (South Africa and Angola) are listed as having contributed, it is significant that only 12% of these respondents advocated treatment in order to decrease the progression rate.  Overall, responses indicated that treatment is most commonly initiated when myopia increases at an average rate of ≥1 dioptre per year, but many initiated treatment on identification of the problem (mean age of 5.3 years), while others intervened at a threshold of 3.5 dioptres.  Fifty-four percent of respondents preferred to start with pharmacological treatment (mostly 0.01% atropine eye drops), followed by behavioural treatment (24.7%) and optical treatment, e.g. spectacles or orthokeratology, which includes reshaping of the cornea using contact lenses (21.3%). Behavioural therapy consisted mainly of limiting use of smartphones, less ‘screen time’ at close distances, more time outdoors and reading in natural illumination as much as possible. Respondents did not report on experience in terms of the efficacy of these modalities in slowing progression and preventing the various complications.

Read more:

Graefes Arch Clin Exp Ophthalmol 2018; https://doi.org/10.1007/s00417-018-4078-6

Ophthalmology 2016; 123: 697-708

PLoS One 2017; 12(4): e0175921

 

 

 

2018:27 Associations between fractures and asthma status in children

 

Most of us reading the title of this summary would respond by thinking that if there is indeed a relationship then it would most likely be a consequence of asthma severity and a requirement for corticosteroid treatment, with the latter impacting on bone health and susceptibility to fractures.  Certainly inhaled corticosteroids are commonly prescribed to treat persistent asthma in children, and an association with decreased bone mineral density (BMD) has been described. However the effect of such steroids on fracture risk is perhaps less clear. For example, a population-based study involving 279 boys with hand fractures showed an association with inhaled corticosteroids, while a meta-analysis and systematic review of studies in children found no association between long-term treatment and fracture or reduced BMD.  On the other hand there is evidence that it might not be the steroids but the chronic inflammatory condition itself that increases the risk of fracture in children. In a UK-based study of children and adolescents managed by doctors in 683 general practices, asthma severity was more clearly associated with increased fracture risk than use of inhaled steroids. Most studies have relied on self- or parent-reported history of fractures. Adding to the literature is an Australian study which made use of two large databases lodged within the Barwon Statistical Division in south-eastern Australia. One, the Barwon Asthma Study (BAS), captured data on asthma, allergies and medication use in children attending 91 primary schools. Around 20 000 parents were requested to complete the questionnaire and 76% responded. The second, the Geelong Osteoporosis Study fracture register, captured all radiologically-confirmed fractures sustained by residents within the Division.  The two databases were linked electronically, resulting in some of the BAS subjects being dropped from the analysis because of missing data. The final sample included 16 438 participants (50.5% males; age range 3.5-13.6 years), 823 of whom sustained fractures (i.e. 5.0% of the sample). Fractures occurred more commonly in boys (61.1% vs 38.9%).  Eighty percent of the fractures involved the upper limb, mostly the wrist, followed by the hand and fingers, and then the radius, ulna or humerus.  No association was found with medication use, but it should be noted that no medication was used in 13 546 of the children (82.4%), while inhaled steroids were taken by only 3.3% and oral steroids by 1.7%.  Analysis was by logistic regression and the only parameters that clearly emerged as statistically significant were age above 9 years in boys, and recent wheeze or 1-3 episodes of wheeze within the previous 12 months, again in boys. The strength of the study lies in its large sample size and inclusion of only radiologically-diagnosed fractures; however, there are weaknesses such as parental assessment of asthma severity and lack of information regarding medication dosage. There was also no association between fracture risk and ≥4 wheezy episodes, which suggests too few subjects in that category compared with the 1-3-episode group.  In contrast to this study, a recent report from Canada that also included almost 20 000 children and adolescents found no significant association between fracture and current, recent or past use of inhaled corticosteroids, but did find a greater risk of fracture for systemic corticosteroid use (OR 1.17; 95% CI 1.04-1.33). 
Overall one should probably accept that inhaled steroids do not increase the risk of fracture while the same may not be true for oral treatment, and the jury may still be out on whether severity of the asthma itself, as an inflammatory condition, may increase the fracture risk.
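
As an aside for readers less familiar with the method named above, a generic sketch of such a logistic-regression analysis on simulated data (variable names, effect sizes and data are all invented; nothing here reproduces the study's dataset):

    # Illustrative only: fracture risk modelled on age group and recent-wheeze
    # category with logistic regression, on simulated data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 16_438
    age_over_9 = rng.integers(0, 2, n)
    wheeze_1_3 = rng.integers(0, 2, n)
    logit = -3.0 + 0.4 * age_over_9 + 0.3 * wheeze_1_3   # invented effect sizes
    fracture = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(np.column_stack([age_over_9, wheeze_1_3]))
    fit = sm.Logit(fracture.astype(float), X).fit(disp=0)
    print(np.exp(fit.params[1:]))    # odds ratios for the two predictors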

Read more:

Journal of Paediatrics and Child Health 2018; 54: 855-60

JAMA Pediatrics 2018; 172: 57-64

BMJ Open 2015; 5: e008554

 

 

 

2018:28 Enteral feeding method affects fat and energy intake in preterm infants

 

Perhaps the above statement would not come as a surprise to those with knowledge of chemistry and of interactions between the milk flowing through a feeding tube and the chemical properties of that tube, but it is likely new information for many of us. The significance of the topic begins with the susceptibility of preterm neonates to respiratory problems and, when respiratory support is required, the preference for non-invasive ventilation in order to reduce the risk of bronchopulmonary dysplasia.  However this ventilatory support mode may be associated with gaseous distension of the gut, which in turn may be associated with real or perceived feeding intolerance. In many neonatal units this situation is managed by slower rather than bolus feeding. In a study of babies fed their own mother’s milk it was found that the final delivery of energy and macronutrients to the infant is highly dependent on the feeding method, slower delivery leading to greater loss of nutrients. Continuous feeding resulted in a 40% loss of fat, 33% loss of calcium and 20% loss of phosphate. The cause of the loss is adhesion of the nutrients to the tubing involved in the delivery. In cases where mother’s milk is not available the preferred source is donor milk and, because such milk may be subjected to various steps (e.g. pasteurization) in the processing and preparation prior to delivery to the baby, researchers from Toronto studied changes to donor milk subjected to delivery as a bolus, over 30 or 60 minutes, or as a continuous infusion over 4 hours. Milk was delivered via a 35ml syringe and 5 French feeding tube.  Macronutrient and energy concentrations were assessed using a milk analyser. Pasteurization and preparation resulted in small increases in carbohydrate and protein concentrations, while fat concentration decreased slightly. Levels were unchanged if milk was fed as a bolus, but there were substantial changes as the delivery time increased. After 4 hours of continuous delivery there was an energy reduction of 17.3kcal/dL, mainly due to a reduction of 2.8g/dL in fat concentration.  Concentrations of carbohydrate and protein increased over the 4 hours, an effect the authors relate to the relative concentrating effect of fat loss. In energy terms, according to these results an exclusively donor-milk-fed neonate prescribed continuous feeds would lose ±28kcal/kg/day, leading to a significant energy deficit if one is not aware of the underlying dynamics. The loss of fat is also of concern, given the body’s requirement for the nutrient for brain growth.
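
The ±28kcal/kg/day figure is easy to reproduce from the reported per-decilitre loss if one assumes a typical enteral prescription of about 160ml/kg/day (the intake volume is an assumption on my part, not a figure from the paper):

    # Back-of-envelope check of the quoted energy deficit.
    energy_loss_per_dl = 17.3        # kcal/dL lost after 4 h continuous infusion
    intake_ml_per_kg_day = 160       # assumed enteral intake, not from the paper
    deficit = energy_loss_per_dl * intake_ml_per_kg_day / 100
    print(f"~{deficit:.0f} kcal/kg/day")   # ~28 kcal/kg/day, as quoted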

Read more:

J Parenteral Enteral Nutr 2018; doi: 10.1002/jpen.1430

Nutrients 2015; 7: 423-442

Curr Opin Clin Nutr Metab Care 2015; 18: 269-75

 

 

 

2018:29 The gut microbiome in patients with intestinal failure

 

The concept of the gut microbiome is critically important for clinicians to accept and adopt; unfortunately such adoption brings with it the requirement to learn what for many of us is a new language. A recent article in the Journal of Parenteral and Enteral Nutrition, covering differences in the biome in patients with intestinal failure who have or have not been successfully weaned from parenteral nutrition, clearly illustrates this point about the language. In the context of intestinal failure, which is the critical reduction of functional gut mass below the minimum needed to absorb nutrients and fluids, and which is due to a dysfunctional gut or one that has been shortened through surgical resection (e.g. post necrotizing enterocolitis), the authors showed that the condition was associated with reduced diversity and significant changes in the populations of Proteobacteria (40% vs. 9% in healthy controls) and Bacteroidetes (19% vs. 46% in healthy controls), with smaller differences between the populations of Firmicutes (~40% in both) and Actinobacteria (1% in both).  Proteobacteria are Gram-negative organisms divided into 6 classes (Alpha to Epsilon and Zeta) and include organisms such as Bordetella, Neisseria, Escherichia and Helicobacter. Bacteroidetes are Gram-negative non-spore-forming rods and the phylum includes the abundant Bacteroides. Firmicutes includes Lactobacillus, Streptococcus, Mycoplasma and Clostridium.  Actinobacteria are Gram-positive organisms that play an important role in biodegradation and recycling of organic matter. Jointly these organisms are important for functions such as fermentation and absorption of nutrients in the colon, development of the immune system, and intestinal mucosal growth and integrity. Certain groups such as Clostridia are important for protection against intestinal diseases, whereas others (e.g. certain Enterobacteriaceae) may be proinflammatory and harmful. The metabolic functional potential of the microbiome is enormous, with short-chain fatty acids (SCFAs) being the most important bacterial metabolites and end-products of fermentation of non-digestible dietary carbohydrates (i.e. fibre) by anaerobic bacteria. Acetate, butyrate and propionate represent 90-95% of the SCFAs produced in the colon. Proteins, glycoproteins and peptides from intestinal cell turnover also constitute fermentation substrate. The colon absorbs >95% of SCFAs, contributing 5-10% of the body’s energy requirements, and also stimulating vascular flow and motility, sodium absorption, and cell proliferation and differentiation, and promoting apoptosis of cancer cells. Not all bacteria produce the same SCFAs, and their molar concentration and proportional ratio in the colon depend also on the type of fermentable substrate. In the adult colon microbial cell density is equivalent to 1-2kg of body weight. Measuring differences in the microbiome in various disease states such as intestinal failure and metabolic syndrome is beginning to guide diagnosis and treatment options, ranging from pre- and probiotics to faecal implantation.
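
'Diversity' in this context is usually quantified with an index such as Shannon's H. A coarse illustration using the phylum-level proportions quoted above (real analyses work at genus or species resolution, where the loss of diversity in intestinal failure is far more visible than at this level; the 'other' bin is my simplification):

    # Shannon diversity on the phylum-level proportions quoted above
    # (Proteobacteria, Bacteroidetes, Firmicutes, Actinobacteria); anything
    # left over is lumped as 'other'. Illustrative only.
    import math

    def shannon(proportions):
        """Shannon diversity index H = -sum(p * ln p)."""
        return -sum(p * math.log(p) for p in proportions if p > 0)

    groups = {"intestinal failure": [0.40, 0.19, 0.40, 0.01],
              "healthy controls":   [0.09, 0.46, 0.40, 0.01]}
    for name, p in groups.items():
        p = p + [max(0.0, 1.0 - sum(p))]
        print(f"{name}: H = {shannon(p):.2f}")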

Read more:

J Parenteral Enteral Nutr 2018; doi: 10.1002/jpen.1423

Clinica Chimica Acta 2015; 451: 97-102

Nutrients 2013; 5: 829-51

 

 

 

2018:30 Does autonomic nervous system development in preterm neonates play a role in retinopathy?

 

This might seem a fairly esoteric question, but it is important given that, 75 years after the discovery of the link between oxygen therapy and blindness, we are still not clear as to which infants will and which will not develop retinopathy of prematurity (ROP). The condition remains the leading cause of potentially-preventable childhood blindness in developed countries, and so the search continues for treatable predisposing factors apart from preventing prematurity (which remains elusive) and aggressively decreasing oxygen usage (which has been shown to lead to higher mortality and poor neurological outcomes).  In their quest to tease out differences between ROP-affected and unaffected very-low and extremely-low birthweight infants, researchers from Baylor University in Texas identified differences in exposure to and doses of dopamine and caffeine as independently influencing the occurrence and severity of ROP. Other studies have also incriminated dopamine as an independent risk factor for development of ROP. Both dopamine and caffeine affect sympathetic and parasympathetic tone within the autonomic nervous system, and the Baylor researchers therefore proceeded to study heart rate variability (HRV) in ELBW neonates who subsequently did or did not develop ROP. Heart rate was captured and analysed using appropriate software over one hour during the first 5 days of life, within 5 days of the initial ROP examination, and then at discharge for the unaffected infants or within 5 days of treatment for those with ROP. Comparison of the two groups showed similar gestational ages at 25-26 weeks, birthweights of around 700g, and oxygen requirements at around 40%. Intraventricular haemorrhage, bronchopulmonary dysplasia and sepsis occurred with equal frequency between the groups.  High-frequency (HF) variability was defined as 0.15-0.4Hz and low-frequency (LF) variability as 0.04-0.15Hz. Sympathetic/vagal balance was measured by the LF/HF ratio. There was a tendency for both HF and LF to decrease in the ROP-affected infants between the second and third examinations, and a tendency for both to increase in the control group over the same period.  The authors conclude that autonomic nervous system activity, which is already low in preterm infants, is lower in those who develop ROP than in those who do not. They go further to explain that the choroid of the eye, which supplies the peripheral retina, is dependent on the autonomic nervous system. If the system is dysfunctional then blood and oxygen supply to the retina are impaired. The number of patients in the study was small and the results are not conclusive, so further studies are required. Meanwhile one must remain aware of the potential for harm when using drugs such as dopamine and caffeine, and recognise when certain neonates might be at higher risk than others. Strategic use of agents that are beneficial to autonomic function is an area for additional research.
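
Although the authors' analysis software is not described, the usual recipe for these band powers is straightforward: resample the beat-to-beat (RR) interval series onto an even time grid, estimate its power spectrum, and integrate the 0.04-0.15Hz and 0.15-0.4Hz bands. A generic sketch on simulated data (every numeric choice here, e.g. the 4Hz resampling rate, is a conventional assumption rather than the study's method):

    # Generic sketch of LF/HF estimation from RR intervals; simulated data.
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(3)
    rr = 0.40 + 0.01 * rng.standard_normal(9000)  # ~150bpm neonatal RR series (s)
    t = np.cumsum(rr)

    fs = 4.0                                      # resample evenly at 4 Hz
    t_even = np.arange(t[0], t[-1], 1 / fs)
    rr_even = np.interp(t_even, t, rr)

    f, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=1024)
    lf_band = (f >= 0.04) & (f < 0.15)            # low frequency, as defined above
    hf_band = (f >= 0.15) & (f <= 0.40)           # high frequency, as defined above
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    print(f"LF/HF = {lf / hf:.2f}")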

Read more:

Journal of AAPOS 2018; doi: 10.1016/j.jaapos.2018.03.015

Auton Neurosci 2007; 136:105-9

Ophthalmic Res 2000; 32: 249-56

 

 

 

2018:31 Genetics of ADHD in the 21st Century

 

Technological advances and large-scale collaborations have resulted in successful genetic investigations into neuropsychiatric disorders, including attention deficit hyperactivity disorder (ADHD). Like most common medical conditions, ADHD is not explained by genes alone, with environmental risks also contributing. Recent data have added to what has been known for decades, namely that ADHD is highly heritable, with estimates in the range of 60-90%. However, twin studies have shown a strong genetic overlap with other child psychopathology, most prominently with behavioural problems such as conduct disorder. Additionally, recent genetic studies have highlighted that ADHD is indeed a neurodevelopmental disorder which, like autism spectrum disorder (ASD), intellectual disability and other childhood neurodevelopmental disorders, typically has an early onset, shows a steady clinical course and is commonly accompanied by early cognitive deficits.  Recent studies have also shown a strong genetic overlap between ADHD and ASD, with monozygosity increasing the risk of ADHD by almost 18 times if one twin has ASD (vs. a 4.3-fold risk in dizygotic twins). These associations were most prominent for individuals with higher-functioning ASD rather than ASD with intellectual disability.  Importantly, until the publication of DSM-5 a combined diagnosis of ADHD and ASD was not recognised. Along different lines, while an association between ADHD and lower IQ and intellectual disability has long been recognised, until recently there was a reluctance to pursue genetic studies in such cases, which has meant that many ADHD research studies have not included the full IQ spectrum. It has now been shown that most of the correlation between ADHD and intellectual disability (±91%) is explained by genetic factors.  So what have the genetic studies shown? Some Mendelian disorders are associated with ADHD (e.g. tuberous sclerosis, fragile X syndrome), but mutations such as copy number variants have also shown overlaps of ADHD with both schizophrenia and Tourette syndrome. Genome-wide association studies (GWAS) have also shown overlaps with other disorders, including bipolar disorder, anxiety disorder and major depressive disorder. Overall the studies point towards ADHD being a ‘spectrum’ or continuum that in many cases overlaps, singly or jointly, with other spectrum disorders such as ASD and intellectual disability.  In fact it is proposed that there is an overlap between ADHD, ASD/intellectual disability, and a group of conduct/communication/learning/tic disorders, and that where the interactions occur there is an interplay with subsequent conditions such as ADHD/neurodevelopmental disorders and major depressive and later-onset neuropsychiatric disorders. It is therefore clear that a multitude of genetic variants exists and that no single common or rare gene variant is specific for ADHD.

Read more:

AJP in Advance 2018; doi: 10.1176/appi.ajp.2018.18040383

Am J Psychiatry 2018; 175: 15-27

J Child Psychol Psychiatry 2013; 54: 3-16

 

 

2018:32 Is weakly-acid gastro-oesophageal reflux (WAGOR) clinically significant?

 

In the context of reflux-related respiratory disorders in childhood it is perhaps logical to equate gastro-oesophageal reflux (GOR) with acid reflux and to regard the gastric acid as contributing significantly to respiratory effects and pathology. However, with the advent of pH/multiple intraluminal impedance monitoring in affected children it has been shown that WAGOR and even alkaline reflux are common in the paediatric population, and that both can induce respiratory symptoms such as persistent and/or nocturnal cough, wheezy bronchitis and asthma, recurrent lower respiratory tract infections (LRTI), apnoea and laryngospasm. The latter findings tie in with the oft-observed ineffectiveness of acid-suppressant treatments in this population. To further explore the relationships between respiratory symptoms and WAGOR vs. acidic gastro-oesophageal reflux (AGOR), researchers from Genoa, Italy, performed a retrospective review of children with GOR (categorised as WAGOR or AGOR) in whom broncho-alveolar lavage (BAL) had been carried out during fibreoptic bronchoscopy.  Patients with “difficult-to-treat” respiratory symptoms were enrolled from the Pulmonary and Allergy Unit. Eligibility criteria included recurrent LRTI, persistent/recurrent cough ± wheeze, difficult-to-treat asthma, and recurrent or spasmodic croup in patients who had undergone 24-hour intraluminal monitoring and bronchoscopic BAL. Children with various conditions commonly associated with GOR were excluded (e.g. prematurity, swallowing disorders, structural gastrointestinal abnormalities). Twenty-four children were included, 13 categorised as WAGOR (mean age 4.9 years) and 11 as AGOR (mean age 8.5 years). All were >1 year of age and there were no differences in the most prevalent respiratory symptoms. Neutrophilic alveolitis and an elevated lipid-laden macrophage index were observed in both groups. However, significantly higher epithelial cell numbers were seen in WAGOR subjects, suggesting greater airway damage.  The authors comment on non-acid intestinal contents that may contribute to the alveolitis, e.g. pepsin and trypsin. While pepsin is most active at a pH of 2.0, it is taken up by airway epithelial cells and, even at neutral pH, can alter the expression of multiple genes implicated in cell stress, toxicity and tissue damage.  Aspirated trypsin may contribute to the damage through its ability to disrupt the integrity of cellular tight junctions and integrins, thereby facilitating epithelial cell shedding. An additional thought is that the higher pH in the ‘refluxate’ of WAGOR subjects may be associated with less-efficient protective cough and swallowing reflexes, thus favouring inhalation of refluxate and greater opportunity for damage. Unfortunately, while the article provides food for thought and concern about possibly greater damage in WAGOR than in AGOR patients, there are no recommendations regarding mitigation of risk, for example by performing anti-reflux surgery in those who are eligible.

Read more:

Respir Med 2018; 143: 42-7

Pediatr Pulmonol 2017; 52: 669-74

Br J Clin Pharmacol 2015; 80: 200-8

 

 

 

 

2018:33 Adenovirus outbreak in a neonatal intensive care unit (NICU)

 

Mini-epidemics in NICUs in South Africa are not unusual and receive much media attention. All too often the publicity starts with a report of babies dying, followed by much finger-pointing and assignment of guilt to various parties, and eventually by comment on the offending organism/s and the steps taken to control the outbreak. In contrast to the abovementioned sequence of events, a recent article in Ophthalmology, the journal of the American Academy of Ophthalmology, chronicles the containment of an outbreak in the NICU of the Children’s Hospital of Philadelphia. While several of the steps, such as almost-immediate DNA detection and typing, would almost certainly take much longer in South African hospitals, particularly in the public sector, there is much to learn from the report in terms of proactivity, management and mitigation of risk. The report describes an outbreak of adenovirus in the NICU in which 23 primary and 9 secondary cases (6 employees and 3 parents) were identified. Notably there were no secondary cases among NICU patients, most likely due to the early awareness of the problem and the measures taken to prevent spread within the unit. A thorough review of the affected NICU population and procedures within the unit indicated that cases were widely distributed across the NICU within a variety of nursing teams. Key clinical features showed that 21 of the 23 had retinopathy and all had undergone recent ophthalmological examination during ‘ROP rounds’ carried out by ophthalmologists. The group of 23 represented 54% of neonates examined during the particular month, indicative of the highly infectious nature of the virus and of its source, which was identified as equipment carried into the unit by members of the specialist team.  All affected neonates had respiratory symptoms and 12 required increased respiratory support from their pre-infection baseline, but only 5 developed pneumonia. Eleven had ocular symptoms. Four died, but 3 of these had life-limiting conditions prior to the adenovirus infection. All affected adults (staff and parents) had conjunctivitis and had provided medical care or had direct contact with patients.  Virus was identified on a hand-held lens and an indirect ophthalmoscope used in the examinations. Particularly impressive in the incident was the early identification of a problem when routine NICU surveillance picked up a number of respiratory specimens testing positive for adenovirus, an unusual organism in the unit. This resulted in early identification of affected patients, staff and parents. Affected neonates were placed on ‘contact and droplet precautions’ to limit spread. This phase lasted for 14 days from the positive test result (the infective period for adenovirus) or longer if symptoms had not resolved. Unaffected patients who had had eye examinations were subject to isolation precautions for 14 days from the date of the last examination. Affected patients’ rooms and common staff areas were bleach-cleaned. Affected staff members were given leave for 14 days from symptom onset.  Symptomatic family members were advised not to visit until symptoms resolved. In terms of the contaminated ophthalmological equipment, operators were subsequently required to wear gloves for each examination, and bleach-cleaning was introduced for all equipment brought into the NICU and repeated between patients. The latter involved the addition of assistants to perform the cleaning, and also the provision of additional equipment so that workflow would not be interrupted. 
During a 3-month follow-up it was noted that the bleach resulted in some clouding of the hand-held lenses, but this was resolved by post-bleach washing with sterile water. The report details an ideal model for infection control. Several steps would not be possible in a resource-constrained environment, but the principles apply and should be followed wherever possible.

Read more:

Ophthalmology 2018; https://doi.org/10.1016/j.ophtha.2018.07.008

Pediatr Infect Dis J 2012; 31: 626-7

Am J Infect Control 2007; 35: S65-164

 

 

 

 

2018:34 Update on congenital hyperinsulinism

 

The old-timers among us commonly referred to congenital hyperinsulinism as nesidioblastosis, which they believed described the range of histologic abnormalities within the pancreas such as islet cell enlargement, dysplasia, β-cells budding from ductal epithelium and islet cells in apposition to ducts. Nowadays, with greater understanding and insight, the condition is referred to as congenital hyperinsulinism (CHI), and the term nesidioblastosis is used to describe a rare form of acquired hyperinsulinism with β-cell hyperplasia found in adults, described in relation to Type 2 diabetes or after bariatric surgery.  Regarding CHI, two comprehensive reviews published over the past year bring clinicians up to date with current thoughts on aetiology, diagnosis, management and possible future directions.  Normally, insulin secretion is a highly-regulated process involving facilitated diffusion of glucose into β-cells, where its metabolism results in the formation of ATP, causing ATP-sensitive K+ (KATP) channels in the β-cell membrane to close and preventing the outward flux of K+ ions; the resulting membrane depolarisation opens voltage-gated calcium channels, and the influx of Ca2+ triggers insulin release. Apart from cases associated with syndromes such as Beckwith-Wiedemann and Turner syndrome, there are genetic forms (in around 40% of cases, at this time related to mutations in 12 known genes) and non-genetic forms of CHI (seen, for example, in relation to intrauterine growth retardation and in cases of perinatal asphyxia). The majority of genetic forms involve defects in the KATP channel genes, including those encoding the pore-forming subunits of the channel. These are inactivating mutations which act by reducing the ability of β-cells to efflux potassium ions through the channels, and result in inappropriate membrane depolarisation and unregulated Ca2+ entry. The latter triggers insulin release without any reference or coupling to plasma glucose levels. Other genetic mutations involve nutrient metabolism within β-cells and an increase in the ATP:ADP ratio, which then results in closing of the KATP channels.  There are also mutations that cause unregulated Ca2+ entry into the β-cells, thereby triggering insulin secretion.  Genetic forms do not have consistent disease trajectories, and while homozygous and compound heterozygous mutations are likely to suggest permanent forms of CHI, recent experience is that there is a reduction in severity over time, even in those with severe forms of the disease.  In some there is a switch from hypo- to hyperglycaemia and Type 2 diabetes over time. In this rapidly-evolving field it is also likely that some of the current ‘non-genetic forms’ represent genetic forms that have not yet been identified.  Some genetic forms of CHI are associated with focal CHI, while others diffusely involve the pancreas.  Emergency treatment in the neonate to prevent brain-damaging hypoglycaemia involves glucose infusion and drugs such as glucagon. Subsequent intervention may involve diazoxide and octreotide, both of which have actions relating back to the KATP channels and inhibit insulin release. The increasing awareness that severity of early disease does not correlate all that well with subsequent hyperinsulinism suggests that one should perhaps place greater reliance on currently-available drug treatments and await the development of new drugs, rather than resort early to various degrees of pancreatectomy with the risk of consequent diabetes in the patient.

Read more:

Diabetic Med 2018; doi: 10.1111/dme.13823

J Clin Res Pediatr Endocrinol 2017; 9(Suppl 2): 69-87

Orphanet J Rare Dis 2016; doi: 10.1186/s13023-016-0547-3

 

 

 

2018:35 What about the microbiome in babies born preterm?

 

The role of the microbiome in human health and disease continues to receive attention, partly as a result of the much-anticipated potential to identify and intervene therapeutically in situations in which there are significant deviations from what is regarded as normal or good. A recent article in the Lancet on the short- and long-term effects of caesarean section (c/s) on mothers and babies represents a case in point. The article covers ‘mechanical’ risks for the mother such as haemorrhage at the time of the section and subsequent risks such as uterine rupture. Regarding the offspring, there are the usual concerns that allergy, atopy and asthma rates are increased and there is reduced intestinal microbiome diversity (which might be a factor in the evolution of the latter conditions). Along similar lines, surely one must also consider the fate of preterm infants, many of whom will spend weeks if not months exposed to various antibiotics at the one extreme and risk of nosocomial infection with resistant organisms at the other.  What does their biome look like during the hospital stay, post-discharge and during childhood, and what are the long-term consequences?  Many such infants are also delivered by c/s, so perhaps have the additional risk of being denied the benefits of colonization with maternal organisms during the birth process.  One study of the microbiome in 11 ELBW infants produced evidence of fungi (Candida and Clavispora species) and environmental molds, but also (in 71% of faecal samples) ribosomal sequences corresponding to Trichinella. These findings were in addition to low diversity in the bacterial community, but types of bacteria known to cause invasive disease were nevertheless dominant. Another study showed that prenatal exposure to chorioamnionitis resulted in a higher abundance of potentially pathogenic bacteria (irrespective of exposure to postnatal antibiotics). Perhaps most work has been done in relation to necrotising enterocolitis (NEC) in this highly-susceptible population, with biome changes observed before and after the diagnosis. In a study that involved several hospitals there were differences in the microbiota between NEC-positive and negative populations and also between hospitals. In this study mode of delivery did not influence the results. Several studies have shown lower proportions of beneficial microbes, specifically Bifidobacterium and Lactobacillus, with the bacterial community shifting dramatically over a matter of days.  An overgrowth of Proteobacteria may precede the condition but is also found in healthy infants so is not a good predictor of NEC. The cause of NEC is not a particular microbe but rather a dysfunction of the gut microbiota as a whole, a dysfunction that has a significant impact on the shaping of the immune system.  A current line of research involves oligosaccharides, the second most-abundant carbohydrate source in human milk, but is for nutrition of the microbes rather than for the baby. Oligosaccharides also coat the lining of the gut, making it more difficult for microbes to invade. Lactoferrin also plays a role, suppressing bacterial growth and triggering microbial death by binding to lipopolysaccharides. Studies involving the supplementation of neonatal diets with the latter two constituents of breast milk are underway. Many neonatal units believe in probiotics as a strategy to reduce risk of NEC and there now appears to be a case for adding oligosaccharides and lactoferrin to the probiotic regimen. 
Probiotic research has also produced a particular strain of Bifidobacterium that is more efficient at consuming the oligosaccharides and appears to extend the effect of the probiotic on the biome: whereas the effect of probiotics is usually lost within days of discontinuation, Bifidobacterium longum infantis remains detectable at least 30 days after cessation. As shown above, most of the research covers the early life of the ex-preterm population; how early disruption of the biome relates to subsequent health and disease has yet to be clearly established.
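The ‘diversity’ repeatedly invoked in these studies is usually quantified with an index such as Shannon’s H, computed over the relative abundances of the taxa detected in a sample. The sketch below is purely illustrative: the taxon counts are invented to show how a few dominant taxa depress the index, and are not data from any of the studies discussed.

```python
# Toy illustration of how microbiome "diversity" is quantified:
# Shannon's index H = -sum(p_i * ln p_i) over taxon relative abundances.
# The counts below are invented for illustration only.
import math

def shannon(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

term_gut    = [40, 25, 15, 10, 5, 3, 2]   # many taxa, even spread -> higher H
preterm_gut = [85, 10, 5]                 # few taxa, one dominant -> lower H

print(f"term H = {shannon(term_gut):.2f}, preterm H = {shannon(preterm_gut):.2f}")
# prints roughly: term H = 1.56, preterm H = 0.52
```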

Read more:

Nature 2018; 555: S19-20

Lancet 2018; 392: 1349-57

J Matern Fetal Neonatal Med 2016; 29: 99-105

   

 

 

 

2018:36 Access to marijuana and advocacy role for paediatricians

 

Much has happened since this topic was covered in Summary 1732 one year ago. At that time the discussion centred on legalisation of medical marijuana, which in practice means access to any form of the drug for medical indications that seem to range from insomnia to incurable cancer. At that time a figure of $3 billion was quoted as the size of the global marijuana market, with the potential to grow to $56 billion. Now, given the efforts around the world to legalise not only medicinal but also recreational marijuana, the global market is already several times larger than the abovementioned $3 billion, with projections of growth to ~$150 billion by 2025. This genie was probably partially out of the bottle many years ago, with slim chances of putting it back, and with South Africa’s recent approval of private use of marijuana we can anticipate relatively easy access to commercially available variants of the drug, legally for adults and most likely illegally for adolescents and even for children.  We therefore need to be cognizant of research that not only assists us in determining which children and adolescents are potential abusers, but also establishes the extent to which marijuana use may have long-term consequences. It is therefore appropriate that researchers in Canada (which is celebrating the economic opportunities now possible through the domestic cultivation and wide distribution of marijuana) are reviewing the impact on adolescents. Previous meta-analyses have linked cannabis use to poor cognition in the domains of learning, memory, attention and working memory. Some of the impairments persist whereas others recover as consumption changes. Imaging studies have also shown disturbing changes, e.g. smaller prefrontal cortex and left hippocampal volumes. The study under review involved 3826 seventh-grade students from 31 schools in Montreal who were enrolled in 2012/2013 and assessed annually for four years. The assessments were computer-based and focused on recall memory, perceptual reasoning, inhibition and working memory. Results showed that certain adolescents are more vulnerable and more likely to abuse than others, i.e. those with lower working memory, perceptual reasoning and inhibitory control. With increased use within a given year there were neuroplastic (concurrent but reversible) effects on memory and perceptual reasoning. This ties in with research showing that the ability to encode and retrieve memory is regulated within the medial temporal lobe, including the hippocampus, which is rich in cannabinoid receptors. Use in one year that was associated with impairment one year later was consistent with neurotoxic (i.e. lasting) effects in two domains of cognition: inhibitory control and working memory. Other studies have also reported long-term effects of early-onset and persistent cannabis use on measures of executive functioning, verbal IQ and decision-making. The importance of early-onset abuse was also found in the Montreal study, with greater impairment in the area of perceptual reasoning and an additive effect if abuse persists. Vigilance and advocacy are necessary as our country follows global trends in access to marijuana.
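The distinction the Montreal group draws between neuroplastic (concurrent) and neurotoxic (lagged) effects rests on fitting both same-year and previous-year use terms in a within-person model across the annual assessments. The sketch below is a minimal illustration of that idea only, assuming a long-format table of annual assessments; the file name, column names (student, year, cannabis_use, working_memory) and the simple random-intercept specification are all assumptions, not the authors’ actual model.

```python
# Illustrative sketch: separating concurrent ("neuroplastic") from lagged
# ("neurotoxic") associations in annual panel data. Names and the simple
# random-intercept specification are assumptions, not the study's model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("annual_assessments.csv")      # hypothetical long-format file
df = df.sort_values(["student", "year"])

# Previous-year use for each student: an effect that survives one year
# is the signature of a lasting (neurotoxic) change.
df["use_lagged"] = df.groupby("student")["cannabis_use"].shift(1)
df = df.dropna(subset=["use_lagged"])

# A random intercept per student separates within-person change from
# stable between-person differences (the "vulnerability" effect).
model = smf.mixedlm(
    "working_memory ~ cannabis_use + use_lagged + year",
    data=df,
    groups=df["student"],
).fit()
print(model.summary())  # concurrent term ~ neuroplastic; lagged ~ neurotoxic
```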

Read more:

Am J Psychiatry 2018; doi 10.1176/appi.ajp.2018.18020202

JAMA Psychiatry 2015; 72: 994-1001

Proc Natl Acad Sci USA 2012; 109: E2657-E2664

 

 

 

 

2018:37 Is autism related to prenatal exposure to drugs affecting neurotransmitter systems?

 

This sounds like a reasonable question, which researchers from Israel addressed in a review of a database covering 1405 cases of autism and 94 844 control subjects.  Data were sourced from a large health maintenance organization (HMO) and the sample included 35.6% of the children born in Israel between 1997 and 2007. Mean age at the end of follow-up was 11.6 years. Medicines received by women during pregnancy were placed into 55 categories that targeted neurotransmitter systems, covering most such medications including antidepressants and antipsychotics; over-the-counter medicines were not included. The hazard ratio for autism linked to antagonists of the neuronal nicotinic acetylcholine α receptor was significant at 12.94 after adjustment for factors such as the maternal condition/reason for the medication (95%CI 1.35-124.25; p=0.03) and remained significant in all sensitivity analyses. Drugs in this category include anti-epileptics and antagonists of α7 nAChR, a wide group represented by conotoxin and bungarotoxin; the Alzheimer’s disease drug memantine, acting as an antagonist in a side pathway, also belongs in this group. However, given the very wide confidence interval shown for the category (1.35-124.25) and the authors’ comment that the number of associations between drug and autism was very small, the clinical significance of the association is questionable. In fact, the authors state that most associations were modified or nullified when adjusted for maternal characteristics. At the opposite end of the spectrum were cannabinoid receptor agonists/fatty acid amide hydrolase inhibitors, muscarinic receptor 2 agonists, opioid receptor κ and ε agonists and α2c-adrenergic agonists, all of which were associated with lower estimates of autism risk; but again, all models became statistically insignificant in sensitivity analyses, casting doubt on the associations. Notably, while there appeared to be a protective effect of cannabinoid receptor agonists, exposure to marijuana was not measured, so one should not draw the conclusion that marijuana is protective against autism. Overall, this large study of almost 100 000 subjects, whose title actually suggests that there is indeed an association between exposure to the 55 categories of medication and autism, provides very weak support for such an association.
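The fragility of that headline signal can be checked from the published figures alone. Back-calculating the log-scale standard error from the reported hazard ratio and confidence interval (a standard Wald approximation; nothing study-specific is assumed) shows an interval spanning two orders of magnitude, exactly what one sees when exposed cases are very scarce:

```python
# Back-of-envelope check on the alpha-nAChR-antagonist signal: reconstruct
# the log-scale standard error from the reported HR 12.94 (95% CI 1.35-124.25).
import math

hr, lo, hi = 12.94, 1.35, 124.25
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # ~1.15 on the log scale
z = math.log(hr) / se                              # ~2.2, i.e. p ~ 0.03
print(f"log-HR SE = {se:.2f}, z = {z:.2f}")

# An SE above 1 on the log scale is typical of very sparse exposed-case
# counts (roughly, SE^2 ~ 1/exposed_cases + 1/unexposed_cases in a sparse
# model), which is why the interval spans two orders of magnitude.
```

Reassuringly, the recovered z-statistic of ~2.2 reproduces the reported p-value of 0.03, confirming that the published numbers are internally consistent while underlining how few events drive the estimate.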

Read more:

JAMA Psychiatry 2018; doi:10.1001/jamapsychiatry.2018.2728

JAMA 2013; 309:1696-1703

BMJ 2013; 346: f2059

 

 

2018:38 Urinary hepcidin for diagnosing iron deficiency and iron deficiency anaemia (IDA)

 

Summaries 1524 and 1525 in this series discussed hepcidin as the body’s ‘master iron regulator,’ but with the relationship between the hormone and iron levels seemingly working in the ‘wrong’ directions. For example, inactivation of the hepcidin gene (or mutations on chromosome 19 in humans) results in severe iron overload and haemochromatosis, while situations in which there is overexpression of the peptide result in severe anaemia unresponsive to iron. In the sports environment there has been interest in measuring (lower) hepcidin levels in athletes who might be taking erythropoietin to stimulate red cell mass and oxygen-carrying capacity, but confounding effects of altitude or coexisting inflammation on hepcidin production, and also a lack of data on what represents normal hepcidin levels, have presented problems for anti-doping agencies.  With IDA being the most common form of anaemia, affecting around one-third of the world’s population and being especially prevalent in children and adolescents, clinicians are constantly on the lookout for non-invasive tests that would assist with diagnosis of this very common condition. In this regard Indian researchers recently published a report on serum and urinary hepcidin levels in 30 children with IDA aged between 6 months and 5 years, and 30 normal controls. In order to remove potential confounders for hepcidin levels, subjects with acute infections, chronic illnesses, haematinic or micronutrient treatment, thalassaemia trait, obesity or malignancy were excluded. The authors found that the diagnostic accuracy of serum hepcidin was not as good as that of the urinary hormone (area under the ROC curve 0.59 for serum, 95%CI 0.44-0.74, p=0.23, vs 0.70 for urine, 95%CI 0.57-0.84, p=0.007). With a hepcidin cut-off level of <2.67ng/ml the sensitivity was 86.7%, specificity 53.3%, PPV 65% and NPV 80%. The authors refer to another study reported from Egypt in 2011 which also found urinary hepcidin levels to be of value, and in fact had more impressive results. In that study iron-deficient cases were categorised as iron depleted, iron-deficient erythropoiesis or iron-deficiency anaemia, with significant reductions observed in accordance with disease progression. Urinary hepcidin correlated positively with haemoglobin, MCV, HCT, serum iron and ferritin, and negatively with transferrin and TIBC. Their conclusion was that detection by means of urinary hepcidin was simple and non-invasive, and could detect iron deficiency even before the appearance of haematological effects.
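The reported accuracy figures hang together arithmetically: with 30 cases and 30 controls, a sensitivity of 86.7% implies 26 true positives and a specificity of 53.3% implies 16 true negatives, from which the PPV and NPV follow directly. A minimal check (the 2×2 counts are inferred from the published percentages, not taken from the paper):

```python
# Reconstruct the implied 2x2 table from the reported sensitivity and
# specificity (30 IDA cases, 30 controls); counts are inferred, not published.
cases, controls = 30, 30
tp = round(0.867 * cases)        # 26 true positives
tn = round(0.533 * controls)     # 16 true negatives
fn, fp = cases - tp, controls - tn

ppv = tp / (tp + fp)             # 26/40 = 0.65 -> matches reported 65%
npv = tn / (tn + fn)             # 16/20 = 0.80 -> matches reported 80%
print(f"TP={tp} FP={fp} FN={fn} TN={tn}  PPV={ppv:.0%}  NPV={npv:.0%}")

# Note: PPV/NPV computed this way reflect the study's 50% case prevalence;
# in a screening population with lower prevalence the PPV would be lower.
```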

Read more:

J Pediatr Hematol Oncol 2018; www.jpho-online.com

Br J Pharmacol 2012; 165: 1306-15

Ital J Pediatr 2011; doi 10.1186/1824-7288-37-37

 

 

2018:39 Adverse effects of cannabis abuse or dependence during pregnancy

 

Summary 1837 referred to an inverse relationship between the presence of autism spectrum disorder (ASD) in children and maternal exposure to cannabinoid receptor agonists during pregnancy, but the latter did not appear to actually include cannabis, perhaps leaving open the question of whether cannabis may be ‘protective’ against ASD. This open question has been addressed to some extent in a recent analysis of >12.5 million births in the US, of which almost 67 000 were to women who self-reported cannabis abuse or dependence during pregnancy.  The prevalence of abuse or dependence rose from 3.22/1000 deliveries in 1999 (the beginning of the study period) to 8.5/1000 at the end (2013). This figure is substantially lower than North American estimates of cannabis use during pregnancy (ranging from 3.3% to 20.5%), but it should be noted that the study being reviewed here focused on abuse or dependence, whereas other studies have included any use/exposure during pregnancy. In addition to cannabis abuse or dependence the authors included data on maternal age, race, hospital location, type of insurance, income, multiple births, hypertension, diabetes, smoking, alcohol consumption and other illicit drug use, and adjusted for these factors in order to isolate the effect/s of cannabis. Women reporting dependence or abuse were more likely to be under 25, African-American, within the lowest two income quartiles, to have Medicaid as their source of medical insurance, and to utilise urban teaching hospitals. To the extent that the authors were able to relate adverse events to the drug, they found abuse or dependence to be related to preterm premature rupture of membranes (OR 1.17; 95%CI 1.35-1.98), a maternal hospital stay of >7 days (OR 1.17; 95%CI 1.11-1.23) and intrauterine foetal death (OR 1.50; 95%CI 1.39-1.62). Neonates had a higher risk of prematurity (OR 1.40; 95%CI 1.36-1.43) and growth restriction (OR 1.35; 95%CI 1.30-1.41). These results are broadly consistent with those of other cohort studies. Studies that have gone beyond the neonatal period and measured long-term effects have found significant impairments in neurocognitive functioning, with evidence of deficits in abstract and visual reasoning, tasks related to executive functioning, and academic tasks such as reading and spelling. Harking back to the theme that the world is becoming much more accepting of the drug for recreational and ‘medicinal’ use, and noting that cannabis is apparently being touted for treatment of morning sickness, particularly hyperemesis gravidarum, note should be taken of the body of maternal, neonatal, paediatric and adolescent evidence indicating that we are probably not dealing with a product that is completely benign.
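Adjusted odds ratios of this kind come from multivariable logistic regression in which the listed covariates enter the model alongside the exposure, so that the exposure coefficient reflects the association after the confounders are accounted for. The sketch below is a generic illustration of that adjustment step only; the file and column names are hypothetical and this is not the authors’ code.

```python
# Generic sketch of covariate adjustment: once confounders are in the
# model, the exposure coefficient is the log of the adjusted odds ratio.
# File and column names are hypothetical; this is not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

births = pd.read_csv("births.csv")   # one row per delivery (hypothetical)

model = smf.logit(
    "preterm ~ cannabis_dependence + maternal_age + C(race) + C(insurance)"
    " + smoking + alcohol + other_drug_use",
    data=births,
).fit()

adjusted_or = np.exp(model.params["cannabis_dependence"])
ci_low, ci_high = np.exp(model.conf_int().loc["cannabis_dependence"])
print(f"adjusted OR = {adjusted_or:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```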

Read more:

J Obstet Gynaecol Can 2018; https://doi.org/10.1016/j.jogc.2018.09.009

Am J Obstet Gynecol 2015; 213: 201e1-10

Complement Ther Clin Pract 2006; 12: 27-33

 

 

 

2018:40 Inguinal hernia repair in premature infants

 

Perhaps the most stimulating aspect of this summary will be a querying of the statement, in a reference from Montreal, that inguinal hernia repair (IHR) remains the most common procedure in paediatric surgery. A quick literature search brings up a review that lists circumcision and appendicectomy as the most frequent procedures, while another lists IHR below circumcision and does not mention appendicectomy. To some extent the lists depend on what is included, because factors such as the age range and ethnicity of the population, surgical specialty, country and medically-insured status are likely to influence the numbers.  For example, consider the differing frequencies of procedures such as circumcision and placement of grommets in different groups in South Africa. But to return to IHR in general and in infants born prematurely: a recent paper from Korea notes that the incidence of inguinal hernia is as high as 30% in infants born preterm, many times higher than that in the general paediatric population (1-4.4%). In this high-risk group there is a greater likelihood of incarceration, of bilateral hernias, and of post-operative complications and recurrence. Apart from the ‘degree’ of prematurity, other factors that have been found to play a role include duration of mechanical ventilation, exposure to high-frequency oscillation and exposure to postnatal dexamethasone therapy. Early achievement of full enteral feeds may also play a role.  For the surgeon faced with inguinal hernia/s in VLBW and ELBW infants, the immediate questions are when to operate (since delay may increase the risk of incarceration) and, if the hernia is unilateral, whether to explore the contralateral side because of the prevalence of bilateral hernias in preterms. The Korean researchers studied 90 affected preterm infants: 80% were male, mean gestational age was 30.9±3.4 weeks, mean birthweight 1.36kg (range 0.43-3.2kg), age at diagnosis 12-120 days and age at surgery 19-406 days. Eleven had preoperative incarceration, 13 had postoperative complications and 11 recurred. Early repair was defined as within 7 days of diagnosis, while others were repaired >7 days after diagnosis or post-discharge. Incarceration and post-operative complication rates did not differ between the early and later groups, but the recurrence rate was higher in the early-repair group. The latter finding is at odds with that of an Indian study, which did not encounter the higher recurrence rate. Although contralateral exploration was not specifically studied, the Korean authors nevertheless conclude that because of the higher rate of bilateral hernias (either synchronous or metachronous, i.e. in the same infant but at a later date), surgeons should consider contralateral exploration in infants presenting unilaterally. Whether or not IHR is the commonest surgical procedure, it is indeed common. Practitioners responsible for this group of neonates must recognise the risk during the hospital stay and alert caregivers to the possibility of post-discharge complications in treated infants, as well as late presentation in infants free of the condition during the admission.

Read more:

J Pediatr Surg 2018; 53: 2155-9

J Matern Fetal Neonatal Med 2017; 30: 2457-60

J Pediatr Surg 2006; 41: 1818-21   

 
