
Developing medical tests that improve patient outcomes


A course developed by the EFLM Working Group on Test Evaluation on developing medical tests that improve patient outcomes, aimed at laboratory professionals, clinicians, biomarker researchers, and healthcare companies.

Secure your space at a course delivered by the EFLM Test Evaluation Working Group!

Keynote Speakers: Rita Horvath, Patrick Bossuyt, Sverre Sandberg, Sally Lord, Christoph Ebert, Phillip J. Monaghan, Andrew St John, and others

Laboratory medicine has a poor record of bringing new tests to market in a timely and effective way.

Evidence-based laboratory medicine (EBLM) provides the underlying principles for how a new biomarker should go through the test evaluation process, but these principles alone do not appear to have guided better evaluation in practice. This course aims to address that gap by extending the principles of EBLM into practical tools for the key processes of test evaluation.

Key Features of 2½ day course:

  • The course will be interactive and talks by keynote speakers will be followed by practical assignments.
  • Test evaluation – definitions and basic concepts
  • Tools to conduct test evaluation
  • Talks to highlight the differing perspectives and experience of key stakeholders and experts in test evaluation
  • Practical assignments to understand the use of test evaluation tools.

Target Audience:

  • Qualified laboratory professionals
  • Researchers involved in biomarker development and test evaluation
  • Clinicians involved in biomarker use and evaluation in clinical practice
  • Healthcare company and regulatory representatives

Scientific Programme
New EFLM course

 


Practical Guidance on Primary Adrenal Insufficiency


A clinical practice guideline on primary adrenal insufficiency (PAI) released by the Endocrine Society and co-sponsored by AACC calls for diagnostic testing to exclude PAI in acutely ill patients with otherwise unexplained symptoms or signs suggestive of PAI. The guideline also recommends corticotropin stimulation testing to confirm PAI diagnosis—provided that the patient is able to undergo such a test—and it details other optimal diagnostic tests.

“Diagnosis and Treatment of Primary Adrenal Insufficiency: An Endocrine Society Clinical Practice Guideline” has been published in the Journal of Clinical Endocrinology & Metabolism (JCEM).

Andrew Don-Wauchope, MB.BCh, MD, FRCP Edin, FCPath, FRCPath, FRCPC, served as AACC’s representative on the guideline committee. He is an associate professor of pathology and molecular medicine at McMaster University in Hamilton, Ontario, Canada.

PAI, also referred to as Addison’s disease, arises when a patient doesn’t produce enough cortisol, a hormone necessary for maintaining functions such as the body’s immune system and its response to stress, cardiovascular function, blood pressure, and the ability to convert food into energy. Individuals who suffer from this condition can experience gastrointestinal problems, fatigue, muscle weakness, and weight loss.

“Diagnosing primary adrenal insufficiency remains challenging because many of the symptoms are associated with a variety of health conditions,” said Stefan R. Bornstein, MD, PhD, of the Universitätsklinikum in Dresden, Germany, and King’s College in London, U.K., and chair of the task force that authored the guideline, in a statement from AACC.

Bornstein emphasized that immediate treatment of symptoms is key, as delaying this step can raise the chances of a patient dying from the condition. Treatment is necessary even if someone is awaiting a test to confirm the diagnosis, he said.

The guideline recommends intravenous corticotropin stimulation testing with a 250 µg dose for adults and children older than age 2 over other tests to diagnose PAI. Peak cortisol levels <500 nmol/L (18 µg/dL) at 30 or 60 minutes indicate adrenal insufficiency. When corticotropin stimulation testing is not feasible, the authors suggest using a morning cortisol <140 nmol/L (5 µg/dL) in combination with adrenocorticotropic hormone (ACTH) testing as a preliminary diagnostic until confirmatory testing can be done.
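As a rough illustration of how these thresholds fit together, the sketch below encodes the decision points described above; the function name, structure, and return labels are hypothetical, and, as discussed next, the actual cut-point depends on the cortisol method in use.

```python
# Illustrative sketch only: encodes the published thresholds, not a clinical tool.
from typing import Optional

def interpret_pai_workup(peak_cortisol_nmol_l: Optional[float] = None,
                         morning_cortisol_nmol_l: Optional[float] = None) -> str:
    """Peak cortisol <500 nmol/L (18 µg/dL) at 30 or 60 min after a 250 µg
    corticotropin dose indicates adrenal insufficiency; morning cortisol
    <140 nmol/L (5 µg/dL) plus ACTH serves only as a preliminary result
    when stimulation testing is not feasible."""
    if peak_cortisol_nmol_l is not None:
        return ("adrenal insufficiency indicated"
                if peak_cortisol_nmol_l < 500
                else "adrenal insufficiency unlikely")
    if morning_cortisol_nmol_l is not None and morning_cortisol_nmol_l < 140:
        return "preliminary: suggestive of PAI; confirm with corticotropin stimulation"
    return "indeterminate; corticotropin stimulation testing needed"
```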

“The guideline discusses the recommended cut-point for the 30- or 60-minute sample as being dependent on the cortisol method. For simplicity, it still states the traditional 500 nmol/L (18 µg/dL), as this is how the systematic review analyzed the data,” Don-Wauchope told CLN Stat. “Even though cortisol has a good degree of standardization, there is still significant bias between methods. The guideline provides references to the studies that demonstrate that there should be differences in cut-point based on the laboratory method being used. Each laboratory should consider the typical bias for its method against other methods as assessed by a proficiency testing scheme.”

Don-Wauchope added that laboratories “might also need to consider the typical performance of their instrument(s). This information should be available so that they can provide relevant information to the physicians interpreting the Synacthen (ACTH) test results.”

Blood tests to measure renin and aldosterone hormones should be part of the diagnostic process. Glucocorticoid replacement therapy is recommended for all patients with confirmed cases of PAI.

For patients with confirmed PAI and aldosterone deficiency, the guideline suggests mineralocorticoid replacement therapy with fludrocortisone, at a starting dose of 50–100 µg for adults. “Anyone receiving this therapy should be monitored by testing blood electrolyte levels and checking for symptoms like salt craving, light-headedness, blood pressure changes, and swelling of the legs and feet,” according to AACC’s statement, which summarized the guideline’s main recommendations.

The guideline’s section on the perspectives and demand for future research also has implications for laboratories, Don-Wauchope told CLN Stat. “There is increasing interest in salivary testing for cortisol as an alternative test to serum cortisol.” This section of the guideline also mentions using more specific methodologies such as liquid chromatography-tandem mass spectrometry to analyze cortisol.

The laboratory community needs to continue working to improve its assays and the advice it provides on interpretation, Don-Wauchope noted. “This is all part of our quality process. Specifically for cortisol, we should be looking at reference method targets for our proficiency testing, as this would better enable bias assessment.”

Source: CLN Stat – AACC

Ten Ways to Improve the Quality of Send-out Testing


During the past decade, send-out test volumes have grown steadily in many laboratories. This trend can be attributed to increases in the number of available tests, especially genetic tests, as well as proprietary tests that must be sent to specific laboratories. These tests involve more steps and more manual processes than in-house tests, thereby increasing the risk of errors that can cause patient harm (Table 1).

Table 1
Send-out Test Errors That Can Harm Patients
Preanalytic: Incorrect test order placed by the physician results in a delay in receiving results.
Preanalytic: Delay in sending out the test leads to delays in diagnosis or monitoring.
Postanalytic: Delay in entering the test result after receiving it from the send-out lab.
Postanalytic: Failure of the physician to retrieve results leads to a missed or delayed diagnosis.
Postanalytic: Data entry error made when manually entering the result leads to physician misinterpretation of the result.
Postanalytic: Physician misinterprets a long, complex send-out report that they have little experience reading.

 

While this growth trend is not likely to turn around any time soon, there are interventions that prudent laboratory managers can implement to improve the quality of send-out tests and decrease the risk of errors and patient harm. The interventions fall into two phases of the testing process: preanalytic and postanalytic.

Preanalytical Interventions

There are multiple interventions in the preanalytic phase that laboratory managers can put in place, some even before the test is ordered, to improve quality (Table 2). Having computer interfaces with the reference labs reduces transcription and interpretation errors and provides greater clarity in the process. Similarly, using as few outside labs as possible avoids errors associated with variations in each lab’s processing requirements and reduces the complexity of send-out testing.

Labs also can reduce errors by improving how send-out related phone calls are handled. The goal is to have a low rate of abandoned calls and a high rate of physician and nurse questions answered without transferring calls. Imagine a provider’s frustration when he or she needs to ask a laboratory send-out expert for help but can’t get through on the phone. One way to accomplish this goal is to implement a dedicated phone center staffed with knowledgeable personnel.

Another place to implement interventions is in the specimen processing area. The longer specimens are delayed in the processing area, the more likely they will be misplaced or handled inappropriately. We found in our own evaluation of send-outs that the lab used different refrigerators and freezers for holding send-out specimens before transport. This increased the likelihood of delays due to mishandling or forgetting specimens. In the worst cases, tests had to be canceled and specimens had to be recollected from patients. When we simplified the send-out process, we shortened the time it took to transport the specimen from our lab to the reference lab, which led to better overall turnaround times and fewer canceled tests.

Perhaps the best way to improve preanalytical process quality is active test utilization management. This might include developing a lab formulary (1), hiring a genetic counselor to review all genetic orders (2), or having a doctoral-level consultant on-call to help with problem cases. In addition, defining as many tests as possible in the laboratory information system (LIS) will simplify send-out processes and decrease exception handling.

The final preanalytic intervention is the one most labs are familiar with: adding high-volume send-out tests to the lab’s menu. Laboratory managers should regularly review send-out orders to determine whether it would make sense to perform some tests in-house rather than send them out.

Postanalytic Interventions

Laboratory managers also can make a difference in the postanalytical phase of send-out testing (Table 2). Here, interventions center on efficient and reliable methods to get results back to providers and patients. Entering test results in the LIS can be improved in several ways, including electronically interfacing as many results as possible, scanning the results that cannot be interfaced, and implementing special processes for manual data entry.

Table 2
10 Ways to Improve Quality of Send-out Tests
Preanalytic
  1. Establish computer interfaces to major reference labs.
  2. Consolidate to as few reference labs as possible.
  3. Establish a call center to answer provider questions.
  4. Get the specimens out the door as quickly as possible.
  5. Implement active test utilization management.
  6. Define as many tests as possible in the LIS.
  7. Adjust in-house test menu as needed.
Postanalytic
  8. Establish computer interfaces as much as possible.
  9. Have a system to ensure physician acknowledgement of results.
  10. Develop quality metrics to ensure the other nine areas are in control.

 

Electronic interfacing is the preferred way to transfer results from the send-out lab to the LIS. Interfacing can be difficult, time-consuming, and expensive, and some esoteric labs do not have an interfacing option; however, a growing number of companies specialize in implementing electronic communication between laboratories. When interfacing is not feasible, installing a scanning system to download the image of the result directly into both the LIS and the electronic medical record (EMR) is a second option. This is especially important for complex, lengthy reports.

Typically, there’s no way to avoid some manual data entry of send-out test results. Labs can reduce the chance of manual data entry errors by instituting redundant data entry and double-checking with accountability. Redundant data entry involves two people entering the same result independently and having the result verified only if these independent entries match. With double-checking, a second person verifies what another staff member entered, and both subsequently sign off on the result.
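A minimal sketch of the redundant-entry check described above, with hypothetical names and no ties to any particular LIS:

```python
# Two independently keyed entries are compared; only an exact match is released.
def verify_redundant_entry(entry_a: str, entry_b: str) -> bool:
    return entry_a.strip() == entry_b.strip()

# Example: a transposed digit in the second entry holds the result for review.
assert verify_redundant_entry("12.7 ng/mL", "12.7 ng/mL") is True
assert verify_redundant_entry("12.7 ng/mL", "12.1 ng/mL") is False
```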

By whatever means results accurately make their way into the EMR, ideally there will be some type of EMR functionality or other process that encourages physicians to retrieve and acknowledge those results. (See Box, below) (3,4). This reduces potential harm to patients due to missed abnormal results, especially in those cases where tests take several weeks or months to complete.

Abnormal Send-out Lab Results: Actions to Reduce the Incidence of Failure-to-Notify Patients

  • Inform patients that no news about a lab test result is not good news.
  • Conduct regular audits by doctoral-level lab staff to determine if abnormal results were acknowledged by a physician, with follow-up when there is no acknowledgement.
  • Create automated/semi-automated reports and mail them to patients.
  • Send all results to physicians’ electronic inboxes.
  • Flag abnormal results in physicians’ electronic inboxes.
  • Require care providers to electronically acknowledge viewing.
  • Allow patients access to their EMRs.
  • Autopage abnormal results to physicians’ wide-screen pagers or to on-call physicians.

Adapted from references 2 and 3.

Quality metrics for send-out testing that monitor the nine areas mentioned previously also can decrease the likelihood of patient harm. For example, labs that use phone call handling software can monitor the abandoned call rate and the time-to-first-answer. In addition, labs can set up send-out test tallies to review the highest-volume tests and reference labs. These data help inform decisions about which tests to bring in-house and which reference labs to establish a computer interface with. Manual tallies that track how quickly manual results are entered in the EMR, as well as the rate of corrected reports due to transcription errors, also are helpful. Other metrics include: tracking how long it takes for providers to acknowledge results; systematically reviewing test utilization management to ensure management decisions are consistent; and recording the number of undefined send-out tests to look for opportunities to decrease costs.
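As a sketch of how two of these metrics might be computed, assuming the lab can export call logs and result timestamps (names and structures here are illustrative):

```python
from datetime import datetime
from typing import List, Tuple

def abandoned_call_rate(calls_received: int, calls_abandoned: int) -> float:
    """Fraction of send-out-related calls abandoned before being answered."""
    return calls_abandoned / calls_received if calls_received else 0.0

def mean_manual_entry_delay_hours(events: List[Tuple[datetime, datetime]]) -> float:
    """Average hours between receipt of a send-out result and its manual EMR
    entry, given (received_at, entered_at) pairs."""
    if not events:
        return 0.0
    seconds = sum((entered - received).total_seconds() for received, entered in events)
    return seconds / len(events) / 3600.0
```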

The Goal: Improved Patient Safety

Implementing send-out improvements will improve the quality of this category of tests, which will directly impact patient care and enhance lab efficiency.

Authors: Jane A. Dickerson, PhD; Bonnie Cole, MD; Michael Astion, MD, PhD // Date: APR.1.2012

Source: AACC’s Clinical Laboratory News

References

  1. Malone B. The Future of Lab Utilization Management. Clinical Laboratory News 2012;38(1):1. Available online.
  2. Astion ML. Failure to report lab test results to outpatients. Clinical Laboratory News 2009;35(10):18.
  3. Miller C. Making sense of genetic tests. Clinical Laboratory News 2012;38(2):15. Available online.
  4. Casalino LP, Dunham D, Chin MH, et al. Frequency of failure to inform patients of clinically significant outpatient test results. Arch Intern Med 2009;169:1123–1129.

Forthcoming Congresses – February 2016


Calendar of IFCC Congresses/Conferences and Regional Federation’s Congresses

May 20 – 21, 2016 IFCC Roche Conf Biomarkers in AD Mexico City, MX
Nov 26 – 29, 2016 14th Asia-Pacific Federation for Clinical Biochemistry and Laboratory Medicine Congress Taipei, TW
Jun 11 – 15, 2017 IFCC-EFLM EuroMedLab 2017 Athens, GR
Sep 17 – 22, 2017 XXIII COLABIOCLI Congress 2017 and XI Uruguayan Congress of Clinical Biochemistry Punta del Este, UY
Oct 22 – 25, 2017 XXIII IFCC WorldLab 2017 Durban, ZA
May 18 – 23, 2019 IFCC-EFLM EuroMedLab 2019 Barcelona, ES
May 24 – 28, 2020 XXIV IFCC WorldLab 2020 Seoul, KR

 

Calendar of events with IFCC auspices

Feb 27, 2016 11th Annual Meeting of Quality Assurance and Topics on Nutrigenetics, Nutrigenomics and Immunology Upgrades San Luis Potosi, MX
Mar 03 – 05, 2016 XII Congress of the Catalan Association for Clinical Laboratory Sciences Sitges, ES
Mar 03 – 05, 2016 College of Chemical Pathologists of Sri Lanka 1st Annual Academic Sessions 2016 (CCPSL AAS 2016) Colombo, LK
Mar 03, 2016 Seminar on “Beyond Accreditation: Harmonization & Standardization” Mumbai, IN
Mar 09 – 11, 2016 IX National Congress of Clinical Pathology, CONAPAC 2016 Havana, CU
Mar 09 – 11, 2016 Flow Cytometry Course Cordoba, AR
Mar 24 – 25, 2016 5th International Conference on Vitamin D Deficiency and its Clinical Implications Abu Dhabi, AE
Apr 19 – 22, 2016 The 9th International and 14th National Congress on Quality Improvement in Clinical Laboratories Tehran, IR
Apr 20 – 22, 2016 10th International Conference of Clinical Laboratory Automation (Cherry Blossom Symposium 2016) Seoul, KR
May 12 – 14, 2016 XIII Baltic Congress of Laboratory Medicine Tartu, EE
May 12 – 13, 2016 XIV Meeting of the SEQC Scientific Committee Madrid, ES
May 18 – 21, 2016 1st Conference of the Romanian Association of Laboratory Medicine (RALM) Cluj Napoca, RO
May 18 – 20, 2016 Congreso Nacional de Residentes Bioquimicos Buenos Aires, AR
May 25 – 27, 2016 XX Congress of Medical Biochemistry and Laboratory Medicine Belgrade, RS
May 26, 2016 12th EFLM Symposium for Balkan Region Belgrade, RS
Sep 21 – 24, 2016 4th Joint EFLM-UEMS Congress “Laboratory Medicine at the Clinical Interface” Warsaw, PL
Oct 20 – 22, 2016 Joint Meeting of the “3rd Congress on Controversies in Thrombosis & Hemostasis” and the “8th Russian Conference on Clinical Hemostasiology and Hemorheology” Moscow, RU
Oct 27, 2017 International Conference on Laboratory Medicine “Towards performance specifications for the extra-analytical phases of laboratory testing” Padova, IT
Oct 20 – 22, 2017 XIV International Congress of Pediatric Laboratory Medicine Durban, ZA

CDC braces for wave of Zika cases


CDC director Tom Frieden talks to TIME about what we’re learning in the battle against the Zika virus

As the Zika virus continues to spread through the Americas, health officials in the U.S. are hurrying to learn more about the virus and prepare for cases. Currently, the emergency operations center at the U.S. Centers for Disease Control and Prevention (CDC) is on its highest-level alert for the Zika response — only the fourth time in its history. We spoke to CDC director Dr. Tom Frieden about the ongoing outbreak and what we are learning along the way.

Researchers are working on the connection between Zika and microcephaly. Has a lot been learned in the last month?

Absolutely. Every day we are learning more about this virus and how it is currently behaving. I think we can say that the link between Zika and Guillain-Barré looks strong and would not be at all surprising. We’ve seen similar post-infection complications after many different infections, including some that are quite similar to Zika. The link to microcephaly is also getting stronger. It’s not definitive proof yet. It will take more time, including understanding what happens when Colombia and other countries that have large numbers of infections progress so that the women who were infected in the first trimester deliver. We have another investigation team in Colombia.

We currently have about 500 people working on this response at the CDC. This is a big challenge; it is extraordinarily unusual to identify a new cause of a birth defect, and as far as we know, it’s unprecedented to [find] a mosquito-borne cause of a birth defect. So people are concerned, and we understand that. That’s why we are working hard to get as much information as accurately and quickly as possible. We are also now certain that sexual transmission is possible, and this is why we advise men who have sex with women who are pregnant, if they might have a Zika infection because of their travel or residence, to use a condom.

Source: Time

Screening for acute HIV infection raises diagnostic yield


Screening a high-prevalence population for acute HIV infection using an antigen/antibody combination assay instead of rapid HIV testing improved the diagnostic yield by 10%, according to a report published online Feb. 16 in JAMA.

Identifying HIV infection during the acute phase is important because that is the most highly infectious stage of the disease. HIV RNA testing using a pooled protocol is effective at this stage but hasn’t been widely adopted “because only 1 RNA assay is U.S. Food and Drug Administration–approved [for this indication], the pooling protocol is logistically complex and time intensive, and it may not be cost-effective,” said Dr. Philip J. Peters of the division of HIV/AIDS prevention, Centers for Disease Control and Prevention, Atlanta, and his associates.

In contrast, combination assays that detect both the p24 antigen and anti-HIV antibodies are faster, are probably cost effective, and are currently recommended by the CDC and the Association of Public Health Laboratories to screen for acute infection. Nevertheless, these assays are not as sensitive as pooled HIV RNA testing, and their accuracy has not been fully established. Dr. Peters and his associates compared the performance of the HIV antigen/antibody combination assay against that of pooled HIV RNA testing (the reference standard) in a high-prevalence population: 86,836 patients treated at 12 centers in San Francisco, New York City, and North Carolina during a 2-year period, including STD clinics and community-based programs.

Just over half of the study population were men who had sex with men, and the median age was 29 years.

The antigen/antibody assay detected 134 of 168 acute infections that had been missed by the rapid HIV test, but it also produced false-positive results for 93 patients. The assay thus had a sensitivity of 79.8%, a specificity of 99.9%, and a positive predictive value of 59%. Relative to rapid HIV testing, the antigen/antibody assay increased the diagnostic yield by 10.4%, Dr. Peters and his associates reported (JAMA. 2016 Feb 16;315[7]:682-690. doi: 10.1001/jama.2016.0286).

As expected, pooled HIV RNA testing performed even better, detecting 164 of the 168 acute infections for a sensitivity of 97.6%, a specificity of 100%, and a positive predictive value of 96.5%. Relative to rapid HIV testing, pooled HIV RNA testing increased the diagnostic yield by 12.4%.
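For readers who want to check the arithmetic, the reported sensitivities and positive predictive value follow directly from the counts above; the snippet below is a worked example, not part of the study.

```python
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def ppv(tp: int, fp: int) -> float:
    return tp / (tp + fp)

# Antigen/antibody combination assay: 134 of 168 acute infections detected, 93 false positives.
print(round(100 * sensitivity(134, 168 - 134), 1))  # 79.8% sensitivity
print(round(100 * ppv(134, 93)))                    # 59% positive predictive value

# Pooled HIV RNA testing: 164 of 168 acute infections detected.
print(round(100 * sensitivity(164, 168 - 164), 1))  # 97.6% sensitivity
```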

Pooled HIV RNA testing, however, is estimated to cost approximately $160.07 per test, while antigen/antibody combination assays cost only $4.23 each. In addition, antigen/antibody testing requires only 30 minutes (if results are negative) to 60 minutes (if results are positive), while pooled HIV RNA testing requires 6 hours, and the pooling process requires an additional 4-7 days, the investigators noted.

This study was supported by the Centers for Disease Control and Prevention, the San Francisco Department of Public Health, the New York City Department of Health and Mental Hygiene, and the University of North Carolina at Chapel Hill. Dr. Peters and his associates reported having no relevant financial disclosures.

Source: Family Practice News

Monitoring Point-of-Care Testing Compliance


Today, point-of-care coordinators use a variety of processes to maintain control over multiple devices and monitor regulatory compliance of many operators at locations across healthcare enterprises.

Early POCT methods were mostly manual, with minimal or no quality control and limited data management capabilities. Modern POCT devices are greatly improved, but capturing the data required to document compliance remains a labor-intensive process. In addition, despite rapid growth of POCT methods and use, POCT operators often have limited understanding of the regulatory and accreditation requirements for licensure, training, procedures, and documentation. Consequently, nurses and other providers often see POCT coordinators as police who indiscriminately enforce regulations that seem onerous at best, and detrimental to patient care at worst. This is an unfamiliar and uncomfortable role for laboratory medicine professionals, who are highly trained to promote quality patient care and efficient use of resources. In this brief review, we will discuss some POCT-related regulatory issues in the hospital environment, and potential ways to satisfy those requirements.

The Spectrum of Point-of-Care Testing

There is some ambiguity with the term POCT and its predecessors: bedside testing, near-patient testing, and, less frequently, ancillary or decentralized testing. All were derived from the proximity of the laboratory test to the patient or the central laboratory, but this distinction is relative and imprecise. POCT is often regarded as testing performed outside of a central laboratory, but this definition also is unsatisfactory, as limited-service satellite laboratories staffed by laboratory personnel are considered clinical laboratories (or sometimes blood gas laboratories) rather than POCT services, at least for accreditation and regulatory purposes. Their location near the patient does not influence the accreditation standards they have to meet.

For regulatory purposes, satellite laboratories are generally considered extensions of the central laboratory service, rather than a separate classification such as POCT. Therefore, the location at which a laboratory test is performed does not classify it one way or the other. It is even conceivable, given the following considerations, that POCT could be performed in the central laboratory.

It is tempting to define POCT as laboratory tests that are CLIA-waived, but this distinction is too narrow. Although many tests performed at the point-of-care are CLIA-waived, many nonwaived platforms are specifically designed for use outside a central laboratory. Abbott’s iSTAT, for example, is used with waived and nonwaived test cartridges, but it is not an instrument that typically would be deployed in a laboratory: its intended use is in patient care areas and its portability is suited for that purpose. There are numerous examples of small instruments designed for portability, and many of them are nonwaived devices. These instruments include small blood gas analyzers, several coagulation and hematology testing platforms, and even general chemistry analyzers such as the Piccolo from Abaxis. So POCT does not specifically refer to CLIA-waived laboratory tests, but instead includes a wide variety of nonwaived tests and devices.

Yet another criterion for defining POCT—and possibly the most satisfactory definition from a regulatory perspective—is who performs the test. If laboratory personnel perform a test, then this test typically falls under the laboratory license, certificate, and accreditation, even if it is performed outside of the physical laboratory space, and regardless of whether the test is waived or nonwaived. On the other hand, waived or nonwaived laboratory tests performed by non-laboratory personnel are nearly always subject to a different set of regulatory and accreditation standards, and these can neatly be grouped under the POCT umbrella.

Therefore, POCT somewhat misleadingly suggests a location where the test is performed, but in fact the regulatory standards are primarily determined by who performs the test. In practice it is highly unusual for non-laboratory personnel to perform any tests within the clinical laboratory, but the converse is relatively common. Clinical laboratory personnel often perform laboratory tests outside the central laboratory (e.g., streptococcus A screening in an emergency department, sweat tests in a pediatric ward, urine drug screens in the human resources department, international normalized ratio in a coagulation clinic). Tests like these fall under the clinical laboratory CLIA certificate and, in states that issue them, the laboratory license. Consequently, POCT typically refers to waived or nonwaived laboratory tests performed at remote locations by non-laboratory personnel.

Maintaining Compliance

Regulatory oversight of POCT differs in several respects from that of other clinical laboratory services, and maintaining compliance with the agencies involved in POCT oversight can be a daunting task. Federal regulation of POCT is minimal, and for most tests in this category the only requirement is that the test be performed according to the manufacturer’s instructions. However, states and accrediting agencies often impose additional requirements on POCT that healthcare facilities need to deal with. These requirements focus primarily on operator competency and verification that the procedures specified by the POCT manufacturer are strictly followed.

States vary in the degrees to which they regulate POCT. For example, Florida—a state that licenses clinical laboratories and the technical personnel employed by them—has minimal regulations for waived tests but very strict requirements for nonwaived laboratory tests performed by personnel who do not have a clinical laboratory technician or technologist license. The Florida Administrative Code (FAC) specifies the qualifications necessary for non-laboratory personnel performing nonwaived tests, a category of laboratory testing the FAC refers to as alternate site testing. To perform alternate site laboratory tests in Florida, the employee must be a licensed healthcare professional under any one of several categories, including physician, dentist, physician’s assistant, nurse (RN, LPN, or ARNP), respiratory therapist, etc. Thus, the principal regulatory requirement focuses on personnel qualifications, not the proximity of the test to the patient or the laboratory.

But the FAC imposes additional requirements that disqualify the vast majority of nonwaived laboratory tests from being performed by non-laboratory personnel regardless of their qualifications. The tests must use whole blood and must not require specimen manipulations, such as manual dilution or centrifugation. In addition, the instrument must be self-calibrating and equipped with failsafe mechanisms that prevent patient results from being reported in the case of calibration or quality control failure. Therefore, even though nonwaived testing is allowed at alternate sites, Florida law strictly limits the variety of nonwaived tests that can be deployed in a POCT environment.

As with clinical laboratories, compliance with state and federal requirements for POCT is ordinarily met through accreditation by organizations with deemed authority, such as the College of American Pathologists (CAP) Laboratory Accreditation Program or the Joint Commission. Although the accreditation standards recognized by these organizations meet CMS and state requirements, they are not identical in all respects. Therefore, some hospitals may choose to have their clinical laboratories accredited by one organization and their POCT program accredited by another.

CMS does not require that all laboratory services be accredited by the same organization as long as each CLIA certificate is covered by a deemed authority, and in some respects the accreditation standards of one organization may be easier to satisfy in a particular setting than those of another. As a result, it is not unusual for a hospital to have its clinical laboratory services accredited by, for example, the CAP, while its POCT program is accredited by the Joint Commission.

In general, POCT regulatory requirements focus on two areas: training and competency of the personnel doing the testing; and verification of strict adherence to the manufacturer-specified procedure for each test. The latter focus is particularly important because waived or moderately complex laboratory methods, both of which can be performed by non-laboratory personnel under certain circumstances, become highly complex if used in a manner that deviates from the FDA-approved manufacturer’s protocol. Highly complex laboratory tests, by federal law, can be performed only by personnel meeting the qualifications specified in CLIA Subpart M, and additional educational and licensure requirements may be imposed by some states.

Since high complexity essentially eliminates a laboratory test from consideration for POCT, it is critically important that supervision of POCT in a healthcare institution includes verification that testing procedures do not deviate from the manufacturer’s instructions.

POCT device manufacturers have responded to the challenge of monitoring the use of these instruments by designing features such as access control and electronic communication with a laboratory information system (LIS) or other network system. This communication allows the LIS to download quality control and patient results. However, POCT coordinators still need a dedicated resource for managing their POCT programs. The use of connectivity via a data management system can greatly improve efficiency when managing different aspects of compliance. With the adoption of POCT1-A communication protocols, data systems evolved from vendor-specific to vendor-neutral platforms. Although there may be some functional limitations for specific devices, vendor-neutral platforms offer POCT coordinators the flexibility to connect devices from multiple manufacturers, providing better support for the compliance elements of the program.

POCT Compliance Essentials

Device Management

Device management is key as central laboratory tests continue their migration to POCT platforms. POCT devices can be set up and configured remotely from a single central location with software updates manually or automatically downloaded to the devices. In addition, the data management system serves as a repository for testing locations, instrument serial numbers, and instrument service history and software versions. The data management system also tracks the status of the connected devices so that communication and connectivity issues can be addressed promptly.

Quality Control

Quality control (QC) is required for all waived and nonwaived tests. The QC limits and frequency intervals can be configured at the device or managed remotely with the data management system. This prevents an operator from using the instrument once the QC interval has been exceeded or the result is not within acceptable limits. QC results for each device and operator also can be reviewed and evaluated by laboratory personnel, a requirement for most laboratory accreditation programs. The data system also captures and stores comments describing corrective action for unacceptable QC results. Some data management systems allow for QC import into other software programs for peer comparison as well as capture of manual QC results for tests such as fecal occult blood.
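A simplified sketch of the lockout behavior described above; the field names, limits, and interval are hypothetical and would normally be configured in the data management system rather than coded by hand.

```python
from datetime import datetime, timedelta

def qc_lockout(last_qc_time: datetime, last_qc_value: float,
               low_limit: float, high_limit: float,
               qc_interval: timedelta, now: datetime) -> bool:
    """Lock out patient testing when the latest QC result is out of range
    or the configured QC interval has been exceeded."""
    qc_failed = not (low_limit <= last_qc_value <= high_limit)
    qc_expired = now - last_qc_time > qc_interval
    return qc_failed or qc_expired
```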

Operator Management

Operator management refers to controlling access to the POCT devices and tracking the authorization of all operators, with alerts when certifications have expired (see competency management, below). Access to a POCT device can be authorized via operator list downloads when the instrument queries the data management system to determine whether an operator is currently certified. If an operator with expired certification attempts to use the POCT device, he or she will be locked out, preventing use of the device. Some data management systems notify operators when they are approaching the expiration date of their access to a device.
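The certification check behind that lockout might look something like the following sketch; it is a hypothetical illustration, while real systems rely on downloaded operator lists and device-side enforcement.

```python
from datetime import date
from typing import Dict, Optional

def operator_status(operator_id: str, certifications: Dict[str, date],
                    today: date, warn_days: int = 30) -> str:
    """Return 'locked out', 'expiring soon', or 'authorized' based on the
    certification expiration dates on file for each operator."""
    expires: Optional[date] = certifications.get(operator_id)
    if expires is None or expires < today:
        return "locked out"
    if (expires - today).days <= warn_days:
        return "expiring soon"
    return "authorized"
```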

Competency Management

Data management systems enable POCT coordinators to track the dates on which operator competencies were completed for original certification, recertification, QC performance, and patient resulting and reporting. This increases efficiency, especially when paired with a learning management system (LMS) such as HealthStream. Although both systems currently require some manual input and maintenance of data, the ability to interface the data management system to the LMS may be on the technology horizon. Currently, data management systems include the ability to generate reports that show initial training, 6-month, and yearly competencies, all of which are required elements under waived and nonwaived testing standards. Many systems also offer automatic recertification of operators.

Data Monitoring

In order to comply with accreditation standards, POCT coordinators monitor data from activities such as correlation testing, linearity and analytical measurement range verification, proficiency testing, calibration, and patient identification. Data systems can automatically capture this data and document it for review. This data also can be entered by hand from manual tests (e.g. fecal occult blood, dipstick urine, pregnancy tests), although compliance with these standards for POCT that does not involve interfaced instruments is difficult to verify. While instrument platforms exist for each of the previously mentioned tests, they are more commonly performed manually.

Inventory Management

Data systems also are essential for managing consumables for POCT devices. These tools include reports showing usage and device workload that laboratorians can use to establish the frequency and size of supply orders, potentially reducing costs by eliminating waste of expired reagents and controls. Reagent and control lot numbers, and established QC ranges, can be entered into the data system and uploaded to the POCT devices. In addition, alarms can be set to alert the POCT coordinator when new lots are in use that may require validation. Many POCT devices include barcode scanning capabilities that allow reagents and controls to be scanned by operators to verify the current lot number and prevent use of expired or unvalidated reagents. The current lot numbers may reside in the data management system.

Monitoring Device Status

Remote monitoring enables a POCT coordinator to determine the status of any connected devices. Inoperable devices can be immediately identified and either removed from service or repaired. For example, many POCT devices have a data buffer that, when exceeded, prevents the device from being used until the buffer is cleared. This type of error can be detected by the data management system and dealt with promptly by testing or supervisory personnel. By configuring alerts, the data system also may give coordinators the ability to investigate and resolve issues before they become critical.

Remote Access

Remote access enables POCT data management from a computer anywhere within or outside of the organization, based on how the system is configured. With the adoption of mobile devices such as tablets and smartphones, web-based data management applications can be accessed from virtually anywhere to exchange information and manage systems, including in some applications the ability to send remote commands to the devices.

Conclusion

Regulatory oversight of POCT focuses primarily on ensuring the proper training and competency of personnel performing these tests, and verifying that the tests are being conducted according to manufacturer instructions. Supervision of a POCT program requires attention to these and other aspects of laboratory tests performed by non-laboratory personnel. Connectivity via data management platforms has provided an elegant solution to the challenge of managing these regulatory and compliance aspects of a large POCT program.

With widespread implementation of wireless networks, and the built-in WiFi capabilities of most modern analytical devices, data management systems for POCT will eventually support a seamless network of POCT devices deployed throughout a healthcare facility, perhaps fundamentally changing our notion of what is meant by the clinical laboratory.

As more and more laboratory services move outside our traditional workspace, laboratory medicine professionals face expanding responsibilities to ensure the quality and integrity of laboratory services throughout the entire facility. We have outlined just a few of the regulatory and accreditation issues that accompany the supervision of a POCT program.

Authors: Olga Camacho-Ryan, MBA, MT(ASCP), and Roger L. Bertholf, PhD, DABCC // Date: FEB.1.2016 // Source: Clinical Laboratory News

Olga Camacho-Ryan is quality manager and supervisor of point-of-care testing at University of Florida Health Jacksonville Hospital. Email: Olga.Ryan@jax.ufl.edu

Roger Bertholf is professor of pathology and laboratory medicine at University of Florida College of Medicine, and director of clinical chemistry, toxicology, and point-of-care testing at University of Florida Health Jacksonville. Email: Roger.Bertholf@jax.ufl.edu

Source: AACC

Increased mortality tied to higher genetic risk for diabetes, study finds


A higher genetic risk for type 2 diabetes was linked to a greater risk for all-cause mortality, independent of body mass index (BMI), lifestyle and metabolic risk factors, and whether a person had diabetes at baseline, researchers reported.

For each type 2 diabetes (T2D) risk allele a person carried, their mortality risk increased by 4% over the 17-year study period, according to James Meigs, MD, of Harvard Medical School in Boston, and colleagues in Diabetes Care.

However, based on an analysis by ethnicity, the association held true for whites and blacks, but not Mexican Americans, the authors noted.

“Epidemiologic studies have shown that T2D is associated with increased all-cause mortality risk. Given that T2D is partly genetically determined, genetic factors that increase T2D susceptibility may also raise mortality risk through T2D or its related complications,” they wrote. “Here, we tested the hypothesis that carrying a higher aggregate genetic burden of T2D risk … predicted all-cause mortality.”

The study included 6,501 participants from the Third National Health and Nutrition Examination Survey. They were 81.1% white, 12.7% black, and 6.2% Mexican American. The prevalence of type 2 diabetes was similar across the ethnic groups, ranging from 8% to 11%. Over the 17-year study period, 1,556 participants, about 19%, died.

The study participants were genotyped, and genetic data was analyzed with a focus on 38 single nucleotide polymorphisms associated with type 2 diabetes risk. The investigators looked for an association between aggregate genetic risk for type 2 diabetes and all-cause mortality. They also sought to determine whether this association was modified by ethnicity or BMI.
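Conceptually, an unweighted aggregate genetic risk score of this kind is simply the count of risk alleles carried across the genotyped variants; the sketch below illustrates the idea with placeholder SNP identifiers, not data from the study.

```python
from typing import Dict

def risk_allele_count(genotypes: Dict[str, int]) -> int:
    """Sum risk-allele dosages (0, 1, or 2 per SNP) into an aggregate score."""
    return sum(genotypes.values())

example = {"rs_example_1": 1, "rs_example_2": 2, "rs_example_3": 0}  # placeholder SNPs
print(risk_allele_count(example))  # 3 risk alleles
```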

After adjusting for age, sex, BMI, smoking, alcohol use, hypertension, and other risk factors, the investigators found that, for the group as a whole, mortality risk increased slightly for every type 2 diabetes risk allele a person had (odds ratio 1.04, 95% CI 1.00-1.08, P=0.05).

Further adjustment for type 2 diabetes at baseline yielded nearly identical results, with only a slight difference in the P-value (OR 1.04, 95% CI 1.00-1.08, P=0.04).

When the authors evaluated the results by ethnicity, the association remained significant for whites and blacks but not for Mexican Americans (OR 0.95, 95% CI 0.89-1.01, P=0.10).

In an analysis by ethnicity and BMI category (<25 kg/m2, 25–30 kg/m2, and ≥30 kg/m2), the results were only statistically significant for obese whites (OR 1.07, 95% CI 1.02-1.12).

In fact, Meigs and colleagues found that a higher genetic type 2 diabetes risk was negatively associated with mortality risk in Mexican Americans of normal weight (BMI <25 kg/m2, OR 0.91, 95% CI 0.82-1.00).

“The trend toward a mortality advantage among Mexican American participants of normal weight carrying more T2D-related risk alleles warrants replication in larger population-based cohorts consisting of persons of Mexican ancestry with thorough longitudinal follow-up for clinical end points,” they wrote. “Future genetic-environment interaction studies may clarify the mechanisms underlying the heterogeneous effects of T2D-related genetic variants on mortality by ethnicity and BMI, and inform lifestyle intervention strategies directed at those with stronger genetic susceptibility to T2D-related mortality,” they said.

Asked if the genetic risk impacted mortality even in people who did not develop type 2 diabetes, co-author Aaron Leong, MD, also from Harvard, told MedPage Today via email that “we unfortunately couldn’t determine whether the excess mortality risk associated with a higher T2D genetic predisposition occurred only among those who did develop T2D within their lifetime, as we do not have data on new cases of T2D during follow-up.”

“So it is possible that the higher genetic risk for T2D impacts mortality risk even if a person does not develop diabetes; however, we could not test this specific hypothesis,” Leong said.

“In sum, in the U.S., carriers of more T2D risk-raising alleles have a higher mortality risk than non-carriers, suggesting that having a higher genetic burden for the development of T2D may increase the mortality risk. The underlying genetic basis of mortality likely involves complex interactions with non-genetic factors related to ethnicity, T2D, or body weight,” the authors stated. “In the midst of a T2D and obesity co-epidemic from an increasingly obesogenic environment, maintaining a normal body weight may be especially important for lowering mortality risk in individuals with a high genetic predisposition to T2D.”

The study had some limitations: the researchers were unable to distinguish type 1 diabetes from type 2 diabetes, and the study was underpowered to demonstrate an association between genetic risk score and specific causes of death.

Meigs disclosed support from the NIH and the National Institute of Diabetes and Digestive and Kidney Diseases. Leong disclosed support from the Canadian Diabetes Association.

Meigs and co-authors disclosed no relevant relationships with industry.

Source: MedPageToday

Choosing and Retiring Quality Indicators


Every year, laboratorians face the task of reviewing and choosing quality indicators (QIs) for their labs. Given time pressures, it might be tempting to simply continue monitoring the same QIs for another year. However, thoughtful review and decisions about QIs build the framework for strategic quality initiatives for the upcoming year, and are well worth the effort.

QIs—tools that support objective monitoring of errors—are an integral component of a laboratory’s quality management program. Effective QIs foster continuous improvement by helping labs identify potential quality concerns early on. Robust QIs also help stakeholders make informed decisions about how to prevent or minimize future errors. In the United States, laboratories are required to assess quality performance throughout the total testing process (preanalytic, analytic, postanalytic), but the choice of QIs is at the discretion of laboratorians. Laboratorians need to give thoughtful consideration to selecting and developing meaningful QIs that promote continuous quality improvement and safe patient care.

Choosing Quality Indicators

Monitoring errors to assess quality in all laboratory processes is ideal, though impractical. Instead, labs will do well to choose a few QIs that meet the goals and challenges of each laboratory’s unique setting.

Important questions to ask when selecting candidate QIs are: what specific testing processes should be monitored, and why? Lab QIs commonly focus on minimizing the frequency of errors in processes that carry a high risk of patient harm, have known vulnerabilities or ongoing problems that may result in a high frequency of errors, or involve high costs. Reviewing evidence-based literature and engaging experts helps identify candidate QIs.

Another good approach to prioritizing QIs specific to a laboratory’s unique setting is to conduct a risk assessment that identifies sources of error within testing processes. Such an assessment typically involves evaluating the probability of error together with potential negative impact on patient care. This helps identify high-risk testing processes that warrant monitoring.

Next, leaders should build consensus and buy-in among stakeholders on candidate QIs. Successful QIs that support continuous improvement endeavors often involve several inter-disciplinary stakeholders. Selecting candidate QIs with quality goals that match overarching institutional strategic goals (e.g. safety), best practice recommendations, or regulatory requirements motivates stakeholders and helps establish consensus.

After selecting candidate QIs, laboratorians should consider several factors to ensure that any associated data collection is feasible and information gathered is meaningful:

  • Understanding the process of data collection while considering available resources provides insight into whether analyzing a particular QI metric is practical. Describe what data needs to be captured, data source (e.g. electronic capture, manual audit), frequency of data collection, and individuals responsible for collecting data.
  • Defining the scope and limitations helps identify key stakeholders and highlights variables that might dilute the utility of information. Describe the range (e.g. test, location, patient population) for data collection and any exclusions or limitations in capturing select data.
  • Setting target thresholds highlights quality goals and underscores what the lab is trying to accomplish. Designate well-defined target thresholds (i.e. acceptable limits) that meet institution performance goals and align with benchmark data, if available.
  • Identifying optimal presentation of data ensures that this information will be understood clearly. Illustrate QI information in a clear format (e.g. graphics, charts, tables) that incorporates target thresholds and enables performance trending over time.
  • Defining an action plan supports objective decision-making and communicates expectations for each stakeholder. Importantly, describe the frequency for QI evaluation and steps that should be taken if the lab exceeds target thresholds, when/if target thresholds should be modified, when/if to reduce or stop monitoring, and individuals responsible for each action.

Collectively, these factors help laboratorians choose effective QIs that guide continuous improvement on the way to achieving quality goals.
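One way to make these factors concrete is to record them in a structured definition for each indicator, as in the hypothetical sketch below; the field names and example indicator are illustrative, not a required format.

```python
from dataclasses import dataclass

@dataclass
class QualityIndicator:
    name: str                  # the testing process being monitored
    phase: str                 # preanalytic, analytic, or postanalytic
    data_source: str           # e.g., electronic capture vs. manual audit
    collection_frequency: str  # e.g., monthly
    target_threshold: float    # acceptable error rate agreed with stakeholders
    action_plan: str           # steps taken when the threshold is exceeded

    def exceeds_target(self, observed_rate: float) -> bool:
        """True when the observed error rate breaches the target threshold."""
        return observed_rate > self.target_threshold

qi = QualityIndicator("Mislabeled specimens", "preanalytic", "electronic capture",
                      "monthly", 0.001, "Root-cause review with nursing leadership")
print(qi.exceeds_target(0.0025))  # True: triggers the action plan
```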

Retiring Quality Indicators

Using QIs to assess quality in dynamic laboratory practices requires periodic review of whether a QI should still be monitored as is, modified, or retired.

Frequently, laboratorians collect data and monitor the same QIs year after year, even when the results remain acceptable and suggest highly stable quality practices. Settling for QIs that provide minimal actionable information wastes time, energy, and money. However, taking a step back to reevaluate whether a monitored testing process remains high-risk helps clarify whether a QI should be kept or retired.

If a lab decides to continue monitoring a QI, redefining quality goals with more stringent target thresholds and an accompanying action plan still supports continuous quality improvement. Alternatively, reducing how frequently the lab monitors a QI with known stable results might be sufficient to detect any decline in quality over time. Labs should expect to retire QIs periodically so that they adapt to changing laboratory quality goals and practices. This enables them to focus on new QIs that support continuous quality improvement over time.

Author: Nichole Korpi-Steiner, PhD, DABCC, FACB // Date: FEB.1.2016

Source: AACC’s Clinical Laboratory News

Nichole Korpi-Steiner, PhD, DABCC, FACB, is assistant professor of pathology and laboratory medicine, director of point-of-care testing, and associate director of the core laboratory at McLendon Clinical Laboratories at the University of North Carolina at Chapel Hill. +Email: Nichole.Korpi-Steiner@unchealth.unc.edu
