Our DHORT test prep material gives you everything you need to pass the DHORT exam. Our DHORT dumps contain questions that are precisely the same as real DHORT test questions. We at killexams guarantee your success on the DHORT test with our DHORT braindumps.
Discover Health Occupations Readiness Test
https://killexams.com/pass4sure/exam-detail/DHORT

Question: 97
What property of a metal refers to its ability to be hammered into sheets?
B. Thermal conductivity
C. Electrical conductivity
D. Malleability
E. Density

Answer: D

Malleability refers to a metal's ability to be hammered into sheets.

Question: 98
Which of the following elements is most easily oxidized?
C. Lithium
E. Sulfur

Answer: C

Easily oxidized elements readily release electrons. The loss of an electron allows these elements to form a stable valence electron configuration. Of the choices given, lithium releases an electron most readily, so it is the most easily oxidized.

Question: 99
The hamstrings are responsible for flexing which joint?
A. Knee
D. Elbow

Answer: A

The hamstrings are responsible for flexing the knee joint. The hamstrings consist of the biceps femoris, semimembranosus, and semitendinosus. They also assist in knee rotation.

Question: 100
What type of bonding does a molecule of NaCl display?
A. Ionic
B. Polar covalent
C. Nonpolar covalent
E. Covalent metallic

Answer: A

NaCl is composed of two ions, Na+ and Cl-. Therefore, the compound displays ionic bonding.

Question: 101
Which of the following compounds is classified as a metallic oxide?
B. Na2O
E. SO2

Answer: B

Metallic oxides consist of an oxygen atom bound to a metal. The only metal present among the choices is sodium, so Na2O must be the metallic oxide.

Question: 102
A saturated solution of NaCl is heated until more solute can be dissolved. How is
this solution best described?
C. Supersaturated
E. Hydrophobic

Answer: C

When a saturated solution is heated so that more solute can be dissolved, the solution is described as supersaturated. Supersaturated solutions are typically unstable, and the solute can crash out of solution if a seed crystal is added.

Question: 103
A 40.0 gram sample of a radioactive element decays to 5.0 grams in 15 hours.
What is the half-life of this element?
A. 3 hours
B. 2 hours
C. 6 hours
D. 7.5 hours
E. 5 hours

Answer: E
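As a quick check of this answer, the number of elapsed half-lives can be computed from the mass ratio. A minimal Python sketch (the helper name is illustrative, not from the source):

```python
import math

def half_life(initial_mass, remaining_mass, elapsed_hours):
    # remaining = initial * (1/2)**n, so n = log2(initial / remaining)
    n = math.log2(initial_mass / remaining_mass)
    return elapsed_hours / n

# 40.0 g decays to 5.0 g in 15 hours: 3 half-lives, so 5 hours each
print(half_life(40.0, 5.0, 15.0))  # 5.0
```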
In 15 hours, this substance has decayed to 1/8 of its original mass. In other words, the substance has progressed through three half-lives (1/2 x 1/2 x 1/2 = 1/8). Thus, a single half-life for this substance is 5 hours.

Question: 104
Which of the following compounds contains a double bond?
B. C2H4
E. C4H10

Answer: B
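The CnH2n rule can be checked with the standard degree-of-unsaturation formula for hydrocarbons, (2C + 2 - H) / 2, which counts rings plus pi bonds. A minimal sketch (the function name is illustrative):

```python
def degrees_of_unsaturation(c, h):
    # Rings plus pi bonds for a hydrocarbon with c carbons and h hydrogens
    return (2 * c + 2 - h) // 2

# C2H4 has one degree of unsaturation (the double bond); C4H10 has none
print(degrees_of_unsaturation(2, 4))   # 1
print(degrees_of_unsaturation(4, 10))  # 0
```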
Hydrocarbons with the formula CnH2n contain a double bond. The only compound listed with this formula, C2H4, must contain a double bond.

Question: 105
What is the duodenum responsible for?
A. Breaking down food in the small intestine
B. Cleaning the blood
C. Creating bile
D. Absorbing oxygen

Answer: A

The duodenum is responsible for breaking down food in the small intestine.

Question: 106
80.0 grams of NaOH is dissolved in 3.0 moles of H2O. What is the mole fraction of NaOH in this solution?
C. .40
E. .79

Answer: C
The mole fraction of a compound is (moles of the given compound) / (total moles in the system). Here, 80.0 grams of NaOH is 2.0 moles (molar mass 40.0 g/mol), and there are 5.0 total moles in the system. Thus, the mole fraction is expressed as 2/5, or .40.
For More exams visit https://killexams.com/vendors-exam-list
Kill your test at First Attempt....Guaranteed!
Medical Occupations answers - BingNews
https://killexams.com/exam_list/Medical

The 100 highest-paying jobs in America
Fewer companies plan to give raises in 2023 compared to the previous year as the scramble to retain employees eases. Of all the companies surveyed by Payscale Inc. for its 2023 Compensation Best Practices report, 80% said they would offer salary increases, compared to 92% in 2022. Another 15% said they were unsure.
Pay hikes that were higher than usual became common during the coronavirus pandemic as companies sought to keep workers and replace those who had left.
As of April 2023, unemployment in the United States was at 3.4%.
What are the top-paying jobs in the country? Stacker ranked the 100 highest-paying jobs in America using May 2022 data from the Bureau of Labor Statistics, which was updated on April 25, 2023.
Engineers in a variety of fields make this list, as do educators, particularly those working in postsecondary settings. As expected, medical professionals post a strong showing, as well as managers. There are surprises, too. For example, would you have guessed that an art director earns, on average, more than a financial analyst?
Jobs are ranked according to their median annual wage; the median hourly wage and total employment nationwide are also included. Positions that report only hourly wages due to the nature of the work were excluded from this analysis. Additionally, any jobs that listed "all other" in the occupation name also were excluded as these are groupings of jobs, and the data may not accurately reflect every one.
Stacker breaks down what the jobs entail, what skills are required, and how interested people can get a start in the field. Click through to find out which professions offer the best-paying positions.
Gorodenkoff // Shutterstock
Sun, 04 Jun 2023 06:30:00 -0500
https://www.stltoday.com/news/the-100-highest-paying-jobs-in-america/article_21514fbb-66df-5b01-b761-a521b85f19b1.html

Answering the 'Why This Medical School?' Question
Tue, 29 Sep 2020 01:57:00 -0500
https://www.usnews.com/education/blogs/medical-school-admissions-doctor/articles/how-to-answer-the-why-this-medical-school-essay-question

Medical students aren't showing up to class. What does that mean for future docs?
During my first two years as a medical student, I almost never went to lectures. Neither did my peers. In fact, I estimate that not even a quarter of medical students in my class consistently attended classes in person. One of my professors, Dr. Philip Gruppuso, says in his 40 years of teaching, in-person lecture attendance is the lowest he's seen.

Even before the COVID-19 pandemic, first- and second-year medical students regularly skipped lectures. Instead, they opted to watch the recordings at home on their own time. The pandemic accelerated the shift.

This absence from the classroom has a lot of people in the medical education system wondering how this will affect future doctors, and has precipitated wide discussion among medical institutions. Medical education is changing rapidly, and the change is being driven by students — so how do schools incorporate the reality of virtual learning while training them adequately for the huge responsibility of patient care?
"Flip" the classroom for the first two years
The first half of medical education (traditionally the first one to two years, which are also sometimes called the preclerkship years) prepares students to succeed during the second half of medical school, clerkships, where students work directly with patient care teams. Preclerkship medical education is where students learn the technical elements of being a doctor before seeing patients. It includes lectures in medical science — anatomy, embryology, physiology, pathology, and pharmacology — and health system science – ethics, professionalism and public health. And it goes beyond lectures. It includes dissecting a human body in anatomy lab, practicing how to interview a patient and conduct a physical exam (typically using patient actors) and numerous small group discussion sessions connected to specific lectures.
Virtual learning during these critical first two years for me had some significant downsides. I was unable to ask questions of a prerecorded lecturer. Student-teacher relationships, one of the parts of medical education I was most looking forward to, became much harder to cultivate. It was isolating at times.
Dr. Gruppuso and I started talking and we have a few thoughts on how to change the medical education system to mitigate these downsides while supporting students in a decision they have already made to learn on their own time.
Our proposal is this: employ the "flipped classroom" model extensively for preclerkship medical school lectures. In this model, the in-person lecture all but disappears, and students learn most of the classroom-type material on their own before in-person time — hence the flip. We suggest starting with a series of virtual modules to prepare for case-based small group sessions held in person. Activities such as anatomy lab, patient interviewing and physical exam practice and special guest lectures would remain in-person. This, in essence, embraces the virtual lecture trajectory but requires in-person attendance for small group hands-on learning.
A medical student's perspective — Alexander Philips
Let me start by saying, I did enjoy advantages of virtual lectures. Pausing, rewinding, re-watching, and speeding up the talks was a great way to focus on my weak areas and save time, and time was my most valuable resource as a medical student, given the sheer volume of information to be learned. Virtual learning made it much easier for me to incorporate non-lecture resources into my study plan, too, such as flash cards, web tutorials or lectures by third parties.
In the flipped classroom scenario, my typical day might involve a morning of watching short, targeted medical science modules, with pauses in between so I could draw diagrams, study online flash cards, and read and watch other resources. Then, I would have an hour or two of required in-person case-based small group discussion with my professors and classmates where we focus on the clinical applications of that medical science by discussing hypothetical patient cases. Other days would be devoted to anatomy lab, clinical skills practice with standardized patients (patient actors) under the direct supervision of faculty, shadowing in the hospital, and non-structured time for other activities like research, advocacy and community service.
In addition to allowing for discussions and getting to know professors and fellow students, it would give some regularity to my schedule. In the current system, with the convenience of recorded lectures, I was on my own to keep on track with the material and it was easier to fall behind.
A professor's perspective — Dr. Philip Gruppuso
I have taught medical students for nearly 40 years in many contexts – on hospital rounds, during patient appointments, running small group discussions, and teaching large classes. I have lectured on topics that range from biochemical pathways to lifestyle diseases (those connected to things like physical inactivity) to nutrition science and the biology of aging.
The most gratifying part of teaching is passing along the less tangible aspects of being a physician — how to show respect for all patients and be a true caregiver. I do this by telling stories about my clinical experience during lectures and the payoff for me is engagement with students. The pandemic and its attendant shift in how students learned changed all of that.
Fully virtual learning for the first two years of school may have been necessary during the pandemic, but continuing to do this would ill prepare young adults to be physicians.
The intrinsically personal nature of medicine taught in clinical skills curricula or human body dissection cannot be captured in a learning format that is intrinsically impersonal. There's also more to preclerkship education; other facilitators of holistic physician training like research, specialty exploration, and volunteer work, are almost impossible with virtual learning.
Finally, there is a very real threat to the medical education enterprise in altering the role of the physician faculty member. Doctors are unusual among professions in the expectation that they will teach regardless of where and what specialty they practice. Remove the gratification that comes with face-to-face teaching and we risk losing the commitment of faculty, much of whose teaching is done on an entirely voluntary basis.
Medical education at an inflection point — our joint take
In the discussion of what post-pandemic medical education might look like, some have called for the preclerkship years to be entirely virtual. Advancement to clerkships would be determined by competency (i.e., have you mastered the coursework?) rather than time. But we favor a less extreme incorporation of virtual learning that relies on this flipped classroom.
The Warren Alpert Medical School of Brown University, among other schools, is increasingly implementing this approach. The value of interaction with peers, asking questions, and building relationships with teachers is greatest and most time-efficient when students have a thorough understanding of the fundamental frameworks and key concepts of the underlying science. That framework can often be built more efficiently in a tailored virtual setting where students can truly work on their weaknesses, allowing school faculty to focus on helping students apply that knowledge to caring for patients. Teachers may also complement these discussions by sharing experiences about how they diagnosed and treated specific patients working in organizations and communities in which medical students will serve during their clerkships. Doing away with the larger in-person medical science lectures and focusing on developing or sourcing high-quality virtual content draws on the strengths of virtual learning; diverting saved time and resources towards optimizing regular in-person case-based small group sessions with faculty and other students mitigates the drawbacks of virtual learning.
Medical education is at an inflection point. A traditional vs. flipped preclerkship medical science classroom is just one of several decisions we face when thinking about how to train the next generation of physicians. For example, the following questions are intimately intertwined with the role of virtual learning in medical education, and are simultaneously being debated in schools across the country.
What is the role of medical science coursework in medical education? The USMLE Step 1 Exam is the first licensing exam on the path to becoming a physician and primarily tests medical science concepts. The move towards a shortened preclerkship education term will only be accelerated by a recent shift of the exam to pass/fail. Encouraging students to begin viewing medicine from a clinical lens earlier in their training is good, but less time spent building a deep understanding of mechanisms of disease and treatment can undermine the foundation for clinical education.
To what extent can or should preclerkship medical science education integrate outside resources to efficiently teach content? Medical students have already been embracing a shift towards outside resources for years via a self-directed curriculum to either supplement or replace medical school lectures. This has been happening mostly independent of input from faculty or administration.
If the cost of providing lectures decreases in light of reusable or easily updatable virtual content, possibly standardized across schools, the resulting efficiencies could conceivably lower the cost of education. If that can be accomplished, should medical tuition decrease to reflect this? If so, this may mean broader access to medical education, less student loan burden, and fewer barriers to pursuing careers in lower-paying specialties, including primary care. Conversely, the time and faculty intensive nature of more small group sessions may increase cost burden to schools.
Will the benefits of these educational reforms be available to all? For students who enter medical school from less advantaged educational backgrounds, including students with neurodivergence or those from groups underrepresented in medicine (URiM), online coursework may result in poorer educational outcomes. Conversely, neurodivergent learners may benefit from personalized learning modules; URiM students and those that traditionally have less access to faculty may have more face-to-face learning time. As education shifts to a virtual format, it is critical that its effects across the entire student population be evaluated.
These questions are much harder to answer than a question of whether flipped classrooms deserve an increased role in preclerkship medical education. But these choices are not all or nothing. Change should be made with an understanding of the tradeoffs, and with the foresight to mitigate the negative consequences of those changes.
Medical schools need to get preclerkship medical education right. The strong foundation from my (Alexander Philips') first two years of medical school was what helped me diagnose, admit, treat, and discharge my first patient just a few weeks ago as a third-year medical student on my first clerkship. We believe the immediate next step for preclerkship medical science education is clear. A flipped classroom, and thus an increased role for virtual learning in the preclerkship years of medical school, is a promising model. Can we preserve the broad goals of preclerkship medical education while supporting medical students in a decision they have already made to learn on their own time? We believe the answer is yes.
Alexander P. Philips is a third-year medical student at Brown University and tweets @AlexPPhilips. Dr. Philip Gruppuso is the former Associate Dean for Medical Education and currently teaches at Brown. This piece solely represents the perspective of the two authors, who would like to thank Dr. B. Star Hampton and Dr. Sarita Warrier of Brown University for their input.
Copyright 2023 NPR. To see more, visit https://www.npr.org.
Thu, 01 Jun 2023 01:01:00 -0500
https://wusfnews.wusf.usf.edu/2023-06-01/medical-students-arent-showing-up-to-class-what-does-that-mean-for-future-docs

How Mental Health Services Can Improve the Patient and Employee Experience
Medical treatments can disrupt a person's emotional and mental well-being. The fear of the unknown, concerns about test results, financial accessibility to care, and medication side effects can amplify an already stressful situation.
The same can be said for health care providers treating patients. From physicians to nurses and office staff, health care workers at any level and in any field of medicine are prone to occupational stress resulting from a patient's own stress, trauma, and loss.
To reach optimal patient outcomes, health care leaders should be committed to both the patient and employee experiences. Addressing the mental toll of health care — and providing supportive resources — can help an organization meet its patient satisfaction goals.
How Stress Impacts Fertility Care
Like most medical conditions, fertility care involves a constant stream of doctor visits, anticipated test results, and fluctuating financial costs. Considered a silent struggle, aspiring parents go about their daily lives on the outside while battling feelings of shame and guilt on the inside. These tangible and intangible factors can lead to depression, anxiety, and stress.
Address the financial burden of care immediately: After a patient is given a treatment plan, an initial thought can be "How am I going to pay for this?" especially if they are non- or underinsured. If you're in an area of medicine that is typically underinsured, form relationships with financial providers who can help your patients manage their medical costs. And immediately connect patients with financial counselors who can help them navigate the costs of their care.
Set up virtual events with mental health experts: Schedule regular virtual events with mental health professionals who specialize in your area of medicine and can address patients' top emotional and mental concerns. This added service demonstrates your commitment to the patient's experience.
Improve patient communication: Patients want their information quickly, and having answers can help relieve them of unnecessary anxiety. Consider launching an exclusive digital platform, like an app, where patients can access test results, modify or be reminded of appointments, connect with their care teams, and access health and wellness resources specific to their treatment. Efficient communication leaves little room for misinterpretation and can reduce patient stress.
Appoint someone in charge of mental health: Consider onboarding a mental health professional who can provide support to both patients and employees. In 2022, Inception added a new role of Chief Compassion Officer, hiring a globally recognized leader in mental health to support both our patients and our family members.
Offer ancillary services: Acupuncture and yoga are proven tools to help patients destress. Our fertility practices offer these ancillary services to aspiring parents to help them manage the stress, anxiety, depression, and physical pain they may experience from fertility care.
The Mental Health of Employees
It was never clearer than during the COVID-19 pandemic that health care workers face enormous stress levels. But we don't need a global health threat to understand that medical occupational stress can impact performance, efficiency, and patient outcomes. And in fertility care, that's understandable, as our care teams are so emotionally invested in each patient's journey.
Health care workers are also known to put the well-being of their patients before themselves. As a health care leader, it's your responsibility to ensure your team gets the support they need to be their best selves for their patients.
The American Psychological Association's 2022 Work and Well-being Survey found that 81% of respondents are seeking companies that support mental health programs for their employees. The benefits of supporting employee mental health go beyond hiring and retaining talent.
Fostering an empowering environment that promotes wellness initiatives demonstrates to employees that their well-being is essential to the organization. This can help improve employee morale and have a trickle-down effect on patients, who in turn get the best, most caring service.
A Wellness Framework
Organizations in any industry, especially health care, should consider developing a wellness framework to address employee mental health to include:
Employee assistance programs that give team members and their families access to mental health support services, such as free counseling sessions and virtual events led by mental health experts. This initiative has been well-received by our team members.
Comprehensive health benefits that include resources for quality mental health care services and treatments
Robust internal communications so that all members of an organization feel like they're on the same team working for the same cause
Dedicated time to celebrate team members' roles and their accomplishments. Through Employee Experience Week, we celebrate the achievements of our team members, recognize their talents and what they bring to the company (i.e., Embryology Day), and schedule fun events and activities where we can connect on a level outside of everyday work. We also celebrate our team members on a regular basis and throughout the company.
DEI efforts that celebrate different perspectives, experiences, and backgrounds. This type of support can help fuel a compassionate workplace where employees feel safe, heard, and valued.
At Inception, we have seen incredible results from our own efforts to improve mental health and wellness initiatives. And it's not just us; research has shown a direct correlation between the employee experience and the patient experience. If you genuinely want your patients to walk away from their medical experiences with the most outstanding level of satisfaction, look at your team members and patients with a 360-degree lens that includes their mental and emotional well-being.
Tue, 30 May 2023 01:02:00 -0500
TJ Farnsworth
https://www.newsweek.com/how-mental-health-services-can-improve-patient-employee-experience-1802491

City's medical marijuana policy for employees remains unchanged, to the chagrin of some
Nearly five months after City Councilor Grant Miller suggested the city change its policies to allow employees to use medical marijuana as they would any other prescription medication, the idea remains a topic of conversation — and of some frustration — among city leaders.
Miller first broached the subject in January when councilors and Mayor G.T. Bynum met to set their priorities for the year. Miller met with Bynum and other city officials in March to discuss the matter further.
“It was a short meeting,” Miller said. “For my mind, the purpose of the meeting was to bring all of those folks together who might be affected or impacted by this … and just sit them down and then find out what information it is that they need in order to move forward with some kind of a policy.”
Miller, who is a licensed cannabis grower, said what he took away from the meeting was that “there was very much a willingness to explore what is possible and that we could probably at some point find some middle ground.”
People are also reading…
He stressed that city officials gave no indication that they “are open to actually changing the policy” but did show an openness “to exploring how we could do that if we were willing to.”
Currently, the city can test employees for drug use and can discipline an employee if THC metabolites are detected.
Oklahomans voted in 2018 to approve State Question 788, which legalized the use of medical marijuana for those with a doctor's recommendation.
In making his proposal in January, Miller cited concerns he’d heard from Tulsa firefighters about the potential harmful effects of opioids and their desire to have cannabis available as a safer option for certain medical conditions.
“We have got doctors handing out prescriptions to city staff and to firefighters for the very same thing we are allegedly trying to combat,” Miller said. “It’s a big problem.”
Matt Lay, president of Tulsa Firefighters IAFF Local 176, said post-traumatic stress, sleep disorders, anxiety and chronic pain are common causes of physical and mental health problems among firefighters.
In the last 15 years, Lay said, four active firefighters have died from suicide or overdoses.
“In all of those situations, you had the presence of opioids,” he said, adding, “If you are telling me that there is a safe and effective alternative to an opioid, why wouldn’t we want firefighters to have access to that?”
Bynum said Thursday that he remains open to discussion and consideration of the issue.
“This is a medical policy issue that impacts hundreds of firefighters who are responsible for protecting the lives of 400,000 Tulsans,” he said. “This is not something I will move on casually or without thorough evaluation.”
Lay said the current policy is problematic because it lacks clarity and is applied arbitrarily.
“That is part of what has been problematic — penalties range from zero discipline and maybe a referral to an employee assistance program all the way through a termination on a first offense,” Lay said.
Little has changed in the firefighters’ drug and alcohol policy since it was adopted in the mid-1990s, Lay said.
“It is very disappointing that a city like Tulsa, that claims to be progressive, should have such archaic views towards something that more than two thirds of Tulsans support and have voted to adopt at the state level,” he said.
Lay pointed to a 2021 survey of 516 likely Tulsa general election voters in which 67% of respondents said they would support a ballot measure allowing firefighters to use medical marijuana if recommended by a doctor and used only while off duty.
Of the 67% of respondents who said they would support such a measure, 46.5% said they would strongly support it and 20.5% said they would somewhat support it.
The survey, commissioned by Local 176 and conducted by Cygnal, had a margin of error of plus or minus 4.31 percentage points.
Lay said the union recently agreed to a tentative agreement with the city on a fiscal year 2024 contract but that the subject of medical marijuana was never on the table.
“We were informed by the city’s negotiator that it was a nonstarter as far as the city was concerned,” he said.
Miller expressed disappointment in the lack of movement on the matter, saying city leaders seem to want things to stay the same.
"They are not interested in giving firefighters an opportunity to use alternative medications and want them to stay on these pain pills and stuff," the councilor said.
Bynum said the medical marijuana issue is an evolving one and that the city did not want to unduly delay firefighters’ getting the raises they deserved while one policy issue was being evaluated.
“So we did not consider it for this contract,” he said. “As I have told Councilor Miller and others, I remain open-minded on the issue. The main challenge right now is that both our city physician and our fire chief do not believe we could safely implement such an option when dosages are not federally monitored and regulated in the same way other medications typically are.
“I do not want to do anything that would put citizens or firefighters at risk, so we have to work through that particular concern.”
The drug policy for firefighters, as spelled out in their collective bargaining agreement and administrative operating procedures, is nuanced.
It reads, in part: "Normally, a non-probationary employee with a previously satisfactory work record will be given one (and only one) opportunity to continue employment after an initial occurrence of a positive drug or alcohol test where such testing was required by the City."
Generally speaking, a firefighter who violates the policy the first time remains on the job but is subject to discipline. The individual will be subject to random and/or periodic drug testing and must participate in an Employee Assistance Program.
A firefighter who violates the policy a second time is subject to termination.
Firefighters are tested for alcohol, marijuana metabolites and cannabinoids, opiates, synthetic and semi-synthetic narcotics, cocaine, amphetamines and PCP.
City officials stress that they follow applicable laws for all employees and adhere to collective bargaining processes.
Fire Chief Michael Baker was among the city officials who met with Miller in March to discuss the issue.
“I am really kind of still studying it, to be honest,” Baker said. “The question I have is: How is it going to impact what we can do to help our people? How would you manage it effectively?
“The most important concern for me, whether it is this topic or any topic, is: How does it impact the trust that the public has in us?”
City employee drug testing policy summary
The Tulsa World compiled the following Q&A to provide a general overview of the city of Tulsa's drug testing policy.
Q. When was it adopted?
A. In 2018, after Oklahoma voters approved State Question 788, legalizing medical marijuana.
Q. Which employees are tested?
A. All new applicants who have been offered a city position are tested for marijuana metabolites, opiates including semisynthetic, cocaine, amphetamines and PCP. As part of the city's testing process, only commercial driver's license holders are tested for barbiturates and benzodiazepines.
Q. What happens if applicants for safety-sensitive jobs test positive for marijuana metabolites?
A. If an applicant for a safety-sensitive job tests positive for marijuana metabolites, the person cannot continue through the hiring process, even if he or she has a valid medical marijuana card. However, the person could apply for a non-safety-sensitive position.
Q. What happens if an employee tests positive for medical metabolites?
A. If an employee is applying for a non-safety-sensitive job, tests positive for marijuana metabolites as part of the drug screening process, and can produce a medical marijuana card, the city will not take any action.
It’s important to note that the only employees subject to random drug tests are those with safety-sensitive jobs, while all employees are subject to drug tests for reasonable suspicion.
The first time an employee tests positive, the employee could be terminated due to their probationary status and/or work history, or the city could enter into a last-chance agreement with that employee, which comes with substance abuse counseling and random testing. If a last-chance agreement is in play and the employee meets the requirements of the last-chance agreement, he or she can return to work.
If the person tests positive again before the last-chance agreement expires, the employee is scheduled for a pretermination hearing.
Employees who test positive again after successfully completing a last-chance agreement for a first violation would be subject to a disciplinary review that could result in termination.
Q. What about police officers, firefighters and 911 employees?
A. Their drug policies are determined through the collective bargaining process.
Generative AI Is Stoking Medical Malpractice Concerns For Medical Doctors In These Unexpected Ways, Says AI Ethics And AI Law
In today’s column, I will be examining how the latest in generative AI is stoking medical malpractice concerns for medical doctors, doing so in perhaps unexpected or surprising ways. We all pretty much realize that medical doctors need to know about medicine, and it turns out that they also need to know about or at least be sufficiently aware of the intertwining of AI and the law during their illustrious medical careers.
Over the course of a medical doctor’s career, they are abundantly likely to face one or more medical malpractice lawsuits. This is something that few doctors probably give much thought to when first pursuing a career in medicine. Yet when a medical malpractice suit is brought against them, the experience can have a cataclysmic impact on their perspective on medicine and become a stupefying emotional roller coaster in their life and livelihood.
A somewhat staggering statistic showcases the frequency and magnitude of medical malpractice lawsuits in the U.S.:
“Medical malpractice litigation is all too common in the United States, with an estimated 17,000 medical lawsuits filed annually, resulting in approximately $4 billion in yearly payments and expenditures” (source: “Hip & Knee Are the Most Litigated Orthopaedic Cases: A Nationwide 5-Year Analysis of Medical Malpractice Claims” by Nicholas Sauder, Ahmed Emara, Pedro Rullan, Robert Molloy, Viktor Krebs, and Nicolas Piuzzi, The Journal of Arthroplasty, November 2022).
The fact that 17,000 medical malpractice lawsuits are filed each year might not seem like a lot, given that there are approximately 1 million medical doctors in the U.S., which amounts to only around 2% getting sued per year, but this happens year after year. It all adds up. Over a ten-year period, that would amount to around 20% of medical doctors getting sued (assuming we smooth out repeated instances), while over a 40-year medical career, the odds would seemingly rise to around 80% (using the same assumptions).
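That back-of-the-envelope arithmetic can be sketched in a few lines of Python. Note that the simple additive estimate slightly overstates the long-run odds compared with compounding the annual rate; both figures assume (as the article does, purely for illustration) a flat per-year rate and independence between years:

```python
# Back-of-the-envelope estimate of a doctor's career-long lawsuit risk.
# Assumes a flat annual rate (17,000 suits among ~1 million doctors, i.e.
# ~1.7%, rounded to ~2% in the article) and independence between years.

annual_rate = 17_000 / 1_000_000  # ~1.7% per year

def naive_risk(years, rate=annual_rate):
    """Simple additive estimate used in the article (rate x years)."""
    return rate * years

def compounded_risk(years, rate=annual_rate):
    """Probability of at least one suit, treating years as independent."""
    return 1 - (1 - rate) ** years

print(f"10-year naive:      {naive_risk(10):.0%}")
print(f"10-year compounded: {compounded_risk(10):.0%}")
print(f"40-year naive:      {naive_risk(40):.0%}")
print(f"40-year compounded: {compounded_risk(40):.0%}")
```

The compounded 40-year figure lands closer to 50% than 80%, which shows how rough the additive shortcut is; the qualitative point, that the risk accumulates substantially over a career, holds either way.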
A research study that widely examined medical malpractice lawsuits in the U.S. made these salient points about the chances of a medical doctor experiencing such a suit and also clarified what a medical malpractice lawsuit consists of:
“A study published in The New England Journal of Medicine estimated that by the age of 65 years, 75% of physicians in low-risk specialties would experience a malpractice claim, rising to 99% of physicians in high-risk specialties.
“Medical malpractice claims are based on the legal theory of negligence. To be successful before a judge or jury in a malpractice case, the patient-plaintiff must show by a preponderance of the evidence (it is more likely than not, i.e., there is a >50% probability that professional negligence did occur based on the evidence presented) the physician-defendant had a duty to the patient to render non-negligent care; breached that duty by providing negligent care; this breach proximately caused the injury or damage; and the patient suffered injury or damages” (source: “Understanding Medical Malpractice Lawsuits” by Bryan Liang, James Maroulis, and Tim Mackey, American Heart Association, Stroke, March 2023).
If you were to place each medical malpractice lawsuit into the relevant categories of the claimed basis for the litigation, you would see groupings something like this (note that each case can fall into more than one category):
Estimated 31% of medical malpractice cases: Delayed diagnosis and/or failure to properly diagnose.
Estimated 29% of medical malpractice cases: Devised treatment gives rise to adverse complications.
Estimated 26% of medical malpractice cases: Adverse outcomes arise that lead to worsening medical conditions.
Estimated 16% of medical malpractice cases: Delay in timely treatment and/or failure to sufficiently treat.
Estimated 13% of medical malpractice cases: Wrongful death.
Various other reasons: Medication errors, improper documentation, lack of suitable informed consent, etc.
We will explore how each of those categories relates to the use of generative AI by a medical doctor.
Before doing so, it might be worthwhile to consider the grueling gauntlet associated with a medical malpractice lawsuit.
Generally, a patient or others associated with the patient are likely to indicate to the medical doctor that they are considering a formal filing concerning the perceived adverse medical care provided by that medical doctor (in some instances, the filing might instead appear out of the blue). The hint or suggestion can then lead to the filing of legal pleadings and the official initiation of the medical malpractice lawsuit.
A medical doctor would then have a series of meetings with their legal counsel and likely their medical malpractice insurer, plus others in their medical care circle or sphere. At some point, assuming the case continues, a pleading judgment would be rendered by the court. If the case continues further, there would be a period of evidentiary discovery associated with the matter, then a trial, and, depending upon the outcome, possibly an appeal.
Throughout that lengthy process, a medical doctor is usually still fully underway in their medical endeavors. They need to cope simultaneously with their already overloaded medical workload and devote disruptive amounts of attention and energy to the medical malpractice lawsuit. Their every thought and action associated with the medical case in dispute will be closely scrutinized and meticulously questioned. This can be jarring for medical doctors who are not used to being openly challenged in an especially antagonistic, adversarial manner (versus a perhaps normal day-to-day collegial style).
Given the above background, let’s next take a look at how generative AI fits into this picture.
Generative AI In The Realm Of Medical Doctor Advisement
I’d guess that you already know that generative AI is the latest and hottest form of AI. There are various kinds of generative AI, such as AI apps that are text-to-text or text-to-essay in their generative capacity (meaning that you enter text, and the AI app generates text in response to your entry), while others are text-to-video or text-to-image in their capabilities. As I have predicted in prior columns, we are heading toward generative AI that is fully multi-modal and incorporates features for doing text-to-anything or as insiders proclaim text-to-X, see my coverage at the link here.
In terms of text-to-text generative AI, you’ve likely used or almost certainly heard about ChatGPT by AI maker OpenAI which allows entry of a text prompt and the AI generates an essay or interactive dialogue in response. For my elaboration on how this works see the link here. The usual approach to using ChatGPT or other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing and at times startling at the seemingly fluent nature of those AI-fostered discussions that can occur.
Please know, though, that neither this AI nor any other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data-trained on text from the Internet and admittedly can do some quite impressive pattern-matching, enabling a mathematical mimicry of human wording and natural language.
Into all of this comes a plethora of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try to keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
A medical doctor is likely to be especially intrigued by generative AI.
A lot of publicity in the medical community seemed to arise when a study earlier this year proclaimed that generative AI such as ChatGPT was able to pass the United States Medical Licensing Examination (USMLE) at roughly a 60% accuracy rate. Here’s what the researchers said:
“Artificial intelligence (AI) systems hold great promise to improve medical care and health outcomes. As such, it is crucial to ensure that the development of clinical AI is guided by the principles of trust and explainability. Measuring AI medical knowledge in comparison to that of expert human clinicians is a critical first step in evaluating these qualities. To accomplish this, we evaluated the performance of ChatGPT, a language-based AI, on the United States Medical Licensing Examination (USMLE). The USMLE is a set of three standardized tests of expert-level knowledge, which are required for medical licensure in the United States. We found that ChatGPT performed at or near the passing threshold of 60% accuracy. Being the first to achieve this benchmark, this marks a notable milestone in AI maturation. Impressively, ChatGPT was able to achieve this result without specialized input from human trainers” (source: “Performance of ChatGPT On USMLE: Potential For AI-assisted Medical Education Using Large Language Models” by Tiffany Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, and Victor Tseng, PLOS Digital Health, February 9, 2023).
Medical doctors likely raised their eyebrows at the fact that generative AI can seemingly pass an arduous standardized medical exam.
Rather obvious questions immediately come to mind:
Does this suggest that generative AI might be coming for my job, some doctors undoubtedly asked, namely AI that performs medical analyses and dispenses medical advice?
Am I going to be replaced by generative AI or will I instead be acting in conjunction with generative AI on my medical diagnoses and medical advisement?
Should I start looking into using generative AI right away and not wait until I am career-wise disrupted or caught off-guard?
What is the most sensible or prudent use of generative AI for medical work as a medical doctor?
The American Medical Association (AMA) has promulgated terminology indicating that this type of AI ought to be referred to as augmented intelligence:
“The AMA House of Delegates uses the term augmented intelligence (AI) as a conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it” (source: AMA website).
Let’s for the moment set aside the notion of an autonomous version of generative AI that functions entirely without any human medical doctor involvement. I’m not suggesting this isn’t in the future and only seeking to conveniently narrow the discussion herein to when generative AI is used in an assistive mode.
I’ve put together an extensive list of the benefits associated with a medical doctor opting to use generative AI. In addition, and of great importance, I have also assembled a list of the problems associated with a medical doctor using generative AI. We need to weigh the problems and downsides against the benefits or upsides, and equally weigh the benefits or upsides in light of the problems or downsides.
Life seems to always be that way, involving calculated tradeoffs and ROIs.
I’ll explore the benefits first, just because it seems a more cheerful way to proceed. The problems or downsides will be explored next. Finally, after examining those two counterbalancing perspectives, we will jump into the medical malpractice specifics about the use of generative AI by a medical doctor.
Hang onto your hats for a bumpy ride.
Touted Benefits Of Generative AI Usage By Medical Doctors
Think of generative AI as being much different than merely doing an online search for medical info such as via a conventional web browser (note that existing browsers are starting to encompass generative AI capabilities, see my coverage at the link here). A traditional web browser will bring back tons of hits that you need to battle through. Some of the found instances will be useful, some will be useless. Worse still, some of the search engine findings might be fraught with misleading medical info or outright wrong medical info.
Generative AI is supposed to be an interactive dialogue-oriented experience. You interact with the generative AI. That being said, you can simply enter a prompt such as a patient profile, and ask the generative AI to do a medical analysis for a one-time emitted essay, but that’s not the productive way to use these AI apps. The full experience consists of going back and forth with the generative AI. For example, you enter a patient profile and ask for a diagnosis. The AI responds. You then question the diagnosis and ask further questions. It is supposed to be highly interactive.
Another angle for using generative AI would be for a medical doctor to enter a devised diagnosis and ask the AI app to critique or review the proposed advisement. This once again should proceed on an interactive basis. The generative AI might question whether you considered this or that medical facet. You respond. All in all, the aim is to have a kind of double-check or at least a means to bounce ideas around to see whether you have exhaustively considered multiple possibilities.
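As a rough illustration of that critique-and-review workflow, here is a minimal Python sketch. Everything in it is hypothetical: the send_to_generative_ai() function merely stands in for whatever chat-completion API a clinician's tooling actually exposes, and the patient details are invented and de-identified.

```python
# Hypothetical sketch of asking generative AI to critique a tentative
# diagnosis. The model call is stubbed out; in practice it would be wired
# to a real chat API, and patient data would be de-identified first.

def build_review_prompt(patient_profile: str, proposed_diagnosis: str) -> str:
    """Frame the doctor's tentative diagnosis as a request for critique."""
    return (
        "You are assisting a licensed physician. Given this de-identified "
        f"patient profile:\n{patient_profile}\n\n"
        f"Critique the proposed diagnosis: {proposed_diagnosis}\n"
        "List differential diagnoses that should also be considered."
    )

def send_to_generative_ai(prompt: str) -> str:
    # Placeholder standing in for a real chat-completion API call.
    return f"[model response to a prompt of {len(prompt)} characters]"

profile = "58-year-old, de-identified; fatigue, joint pain, low-grade fever"
prompt = build_review_prompt(profile, "early rheumatoid arthritis")
print(send_to_generative_ai(prompt))
```

The point of structuring the prompt as a critique request, rather than asking the AI to diagnose from scratch, is that it keeps the medical doctor's own judgment as the primary artifact, with the AI serving the double-check role described above.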
Here are five major ways that I usually suggest medical doctors make use of generative AI, assuming they are interested in doing so:
1) Medical brainstorming: Use generative AI to kick around medical ideas and get outside of your own medical mental in-the-box constraints
2) Drafting medical content: Use generative AI to produce medical content for filling in forms or preparing needed medical documents
3) Reviewing medical scenarios: Use generative AI to assess and comment on medical propositions or scenarios
4) Summarizing medical narratives: Use generative AI to readily examine and summarize dense or lengthy medical content that you want to get the gist of
5) Converting medical jargon into plain language: Use generative AI to convert hefty medical jargon into plain language that can be conveyed to patients or patient families
There are numerous other uses of generative AI for medical doctors. I’m merely noting the seemingly more common uses and ones that can be done with relative ease.
You are now primed for my list of beneficial uses of generative AI for medical doctors in the boundaries of medical decision-making and medical decision support:
Benefit that the generative AI can potentially focus on the particulars of a given patient and thus be far more applicable and specific than broader medical info available online.
Benefit that the generative AI might be more well-rounded in medical facets than seeking advice from a particular medical colleague of a narrow specialty.
Benefit is that the generative AI might be more detailed and pinpointed to deep medical specifics than seeking the advice of a medical colleague of a broader capacity.
Benefit that the generative AI is available 24x7 with no delay in access versus seeking advice from a busy or unavailable colleague.
Benefit is that the generative AI might be updated with the latest in medical content and be ahead of where a medical doctor presently is familiar with the state-of-the-art in medicine.
Benefit is that the generated indications can be readily digitally stored and later retrieved when needed versus verbal conversations with colleagues that are later subject to hindsight interpretation.
Benefit that generative AI can bring together vast troves of disparate medical info and consolidate and select for a particular case at hand.
Benefit is that generative AI can aid in filling out needed medical forms and medical documentation, reducing the paperwork time and energy consumption typically required of a medical doctor.
Benefit is that generative AI can serve as a sounding board to perform medical scenario analyses and aid in ascertaining the most advisable medical path.
Benefit is that generative AI can be a brainstorming tool to inspire out-of-the-box medical considerations that a medical doctor might otherwise not have considered.
Benefit is that generative AI can do a first-pass review of a proposed medical diagnosis or tentative medical decision and provide valuable food for thought to the medical doctor.
Benefit is that the generative AI can serve as a learning aid to enable a medical doctor to get quickly up-to-speed on needed medical matters.
Benefit is that the generative AI might detect and alert a medical doctor to their own potential medical errors and omissions.
Benefit that the generative AI might discern obscure or extraordinary medical circumstances, akin to a Dr. House in a box, that otherwise might have been skipped or gone unnoticed.
Benefit is that if called upon to explain a medical decision that a medical doctor might refer to the generative AI when discussing medical matters with patients and their families, doing so as a means of reassuring them about the validity of the medical decisions made.
Benefit that patients and patient families will potentially use generative AI to try to understand the medical facets being undertaken by a medical doctor, thereby reducing the time the medical doctor must spend explaining the medical underpinnings.
Benefit is that the generative AI might do a better job at explaining medical matters than a medical doctor and provide a secondary added bedside complementary function for the medical doctor.
Benefit is that the generative AI might be inspirational for a medical doctor to leverage the latest in high-tech for seeking the best medical care for their patients.
Benefit that if faced with a medical malpractice lawsuit, the medical doctor might be able to bolster their medical stance by referring to the use of generative AI as an additional tool showcasing the extent and depth of the medical decision-making process.
I snuck into that foregoing list an indication about potentially using generative AI as a means of later bolstering your position during a medical malpractice lawsuit.
Let’s revisit my earlier indication about the categories associated with medical malpractice lawsuits and consider how generative AI might have been able to avoid or overcome the noted lamentable outcomes:
Delayed diagnosis and/or failure to properly diagnose: Use of generative AI might have sped up the time needed to do the diagnosis and/or might have guided or double-checked the medical doctor toward a proper diagnosis, thus averting the adverse outcome.
Devised treatment gives rise to adverse complications: Use of generative AI might have forewarned the medical doctor about adverse complications that could arise due to the treatment and that weren’t otherwise foreseen or failed to be conveyed to the patient.
Adverse outcomes arise that lead to worsening medical conditions: Use of generative AI might have identified or noted the worsening medical conditions on a trending basis that the medical doctor might otherwise not have readily ascertained.
Delay in timely treatment and/or failure to sufficiently treat: Use of generative AI might provide a sense of needed timing for treatment and/or might note that sufficient treatment is not seemingly taking place.
All in all, those benefits assuredly seem quite convincing.
How would any medical doctor not be using generative AI, given the litany of benefits listed?
We next turn toward the set of problems associated with using generative AI by medical doctors. This will aid us in weighing the upsides versus the downsides.
Touted Downsides Of Generative AI Usage By Medical Doctors
I am going to present to you a slew of potential downsides or problems associated with using generative AI by medical doctors.
Pundits who believe wholeheartedly in the use of generative AI by medical doctors will have a bit of heartburn when they see the list. They will almost certainly object that many of the downsides or listed problems can be overcome. To some extent, yes, that is true.
We also need to acknowledge that the benefits I just listed can be readily undermined or attacked too. For each of the benefits listed, you can easily find ways to undercut the stated benefit. Some of those benefits might seem to be the proverbial pie in the sky. They might happen, though the chances of the benefit arising are as scarce as hen's teeth, some would insist.
Fair is fair.
Moving into the potential downsides, let’s take a look at one notable use case, and then we’ll see the entire list. One of the biggest problems or downsides of today’s generative AI is that these AI apps are well-known to produce errors and falsehoods, exhibit biases, and even wildly make up things in what are considered AI hallucinations (a terminology that I disfavor, for the reasons stated at the link here).
Imagine then this scenario. A medical doctor is using generative AI for medical analysis purposes. A patient profile is entered. The medical doctor has done this many times before and has regularly found generative AI to be quite useful in this regard. The generative AI has provided helpful insights and been on-target with what the medical doctor had in mind.
So far, so good.
In this instance, the medical doctor is in a bit of a rush. Lots of activities are on their plate. The generative AI returns an analysis that looks pretty good at first glance. Given that the generative AI has been seemingly correct many times before and given that the analysis generally comports with what the medical doctor already had in mind, the generative AI interaction “convinces” the medical doctor to proceed accordingly.
Turns out that unfortunately, the generative AI produced an error in the emitted analysis. Furthermore, the analysis was based on a bias associated with the prior data training of the AI app. Scanned medical studies and medical content that had been used for pattern-matching were shaped around a particular profile of patient demographics. This particular patient is outside of those demographics.
The upshot is that the generative AI might have incorrectly advised the medical doctor. The medical doctor might have been lulled into assuming that the generative AI was relatively infallible due to the prior repeated uses that all went well. And since the medical doctor was in a rush, it was easier to simply get a confirmation from the generative AI than to dig in and recognize that a mental shortcut was taking place.
In short, it is all too easy to fall into a mental trap of assuming that the generative AI is performing on par with a human medical advisor, a dangerous and endangering anthropomorphizing of the AI. This can happen through a step-by-step lulling process. The AI app also is likely to be portraying the essays or interactions in a highly poised and confidently worded fashion. This is also bound to sway the medical doctor, especially if under a rush to proceed.
Take a deep breath and take a gander at this list of potential pitfalls and problems when generative AI is used by a medical doctor:
Problem of generative AI errors, biases, falsehoods, and AI hallucinations that could mislead or confound whatever medical advisement or essay is being generated for use.
Problem of lack of producible or cited documented supporting references for the generated essays and interactive dialoguing of generative AI.
Problem of cited documented supporting references that are AI hallucinations or otherwise do not exist and yet are portrayed as factual and real.
Problem is that generic generative AI is data-trained generally on the Internet and not to the specifics of medical content.
Problem is that medical content scanned during the Internet training might not be of a bona fide medically sound nature.
Problem is that the generative AI might be frozen in time and not have scanned the latest in medical content available on the Internet.
Problem is that the medically scanned Internet content might be from narrow sources or fail to encompass a wide enough range of bona fide medical materials.
Problem is that the scanned bona fide medical materials of the Internet might be improperly pattern-matched as to overstating or understating what the medical content imbues.
Problem is that the generative AI is solely a mathematical and computational pattern-matching of existing writing on medical matters and is not sentient and has no semblance of common sense, human understanding, etc.
Problem is that the generative AI is not tailored to the specifics of medical diagnoses and medical decision-making and in a sense is out of its league when it comes to the medical domain.
Problem is that a medical doctor using generative AI needs to adequately and sensibly use the generative AI such as via so-called prompt design or prompt engineering else the effort might inadvertently become counterproductive.
Problem is that generative AI functions on a probabilistic basis and the essays and interactive dialogue are likely to change and not be repeatable or reliably consistent.
Problem is that the generative AI has not likely been subjected to medical peer review or other measurements to ensure medical accuracy.
Problem is that the context window limitations of the generative AI might subtly and without notification shortchange the medical analysis that is being conveyed or discussed.
Problem is that a medical doctor might be lulled into assuming that the generative AI is correct and ergo overly rely misleadingly upon the essay or interactive dialogue.
Problem is that a medical doctor in a hurried or overworked mindset might fail to sufficiently double-check the generative AI-emitted medical indications.
Problem is that the entry of patient-related information by a medical doctor into generative AI might be a privacy intrusion and a violation of HIPAA.
Problem is that the entry of patient-related information into generative AI might be onerous to undertake and become another paperwork time-consuming drain for medical doctors.
Problem is that a medical doctor might be forcibly required to use generative AI in a hospital or medical setting even if the usage is potentially time-draining or counterproductive.
Problem is that this use of generative AI is considered potential life-or-death and therefore abundantly risky and well-beyond what the AI maker devised or intended.
Problem is that the use of generative AI for medical decision-making violates software licensing stipulations of the AI maker and puts the medical doctor and medical provider in a tenuous legal posture.
Problem is that if called upon to explain a medical decision, a medical doctor might refer to the generative AI as though it were a coherent medical advisor, upsetting patients and patient families as to a lack of suitable human medical judgment involved.
Problem is that patients and patient families will potentially use generative AI to try and second-guess a medical doctor and raise concerns that are based on faulty considerations.
Problem is that the use of generative AI by a medical doctor can open up new avenues of medical malpractice and enter into an untested medical-legal realm that is murky and nascent.
I’ll highlight a few of those points.
The use of generative AI for private or confidential information is something that you need to be especially cautious about. Entering patient-specific info could be a violation of HIPAA (Health Insurance Portability and Accountability Act) and lead to various legal troubles. For more on how generative AI is potentially lacking in privacy and cybersecurity, see my coverage at the link here.
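To make the privacy caveat concrete, here is an illustrative-only sketch of scrubbing a few obvious identifiers before any text leaves the clinician's systems. This is not HIPAA compliance: real de-identification (for example, the Safe Harbor method's eighteen identifier categories) requires far more than pattern-matching a handful of formats.

```python
# Illustrative-only redaction of a few obvious identifiers before text is
# sent to a generative AI service. NOT a substitute for proper HIPAA
# de-identification, which covers many more identifier categories.
import re

def redact_basic_identifiers(note: str) -> str:
    note = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", note)         # SSN-like
    note = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", note)  # dates
    note = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", note) # phones
    note = re.sub(r"\b[\w.+-]+@[\w-]+\.\w+\b", "[EMAIL]", note)    # emails
    return note

print(redact_basic_identifiers("Pt J. Doe, DOB 4/12/1965, call 918-555-0100"))
```

Even with such scrubbing, free-text notes can re-identify a patient through rare diagnoses or contextual details, which is part of why entering patient-related information into a third-party AI service remains legally fraught.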
Another issue is whether generative AI is allowed to be used for medical purposes, to begin with. Some of the software licensing agreements explicitly state that medical professional use is not allowed. This once again can raise legal issues. See my discussion about prohibited uses of generative AI at the link here.
Each of the problematic or downside points in the list above is worthy of a lengthy elaboration about what they are and how they can be overcome. I don’t have space to cover this in today’s column, but if there is sufficient reader interest I’ll gladly go into more depth in later columns.
The Medical Malpractice Dual-Edged Sword Of Generative AI Use
I will finish up this discussion by noting the dual-edged sword of generative AI use in the medical domain and how this relates to medical malpractice considerations.
First, a recent paper published in JAMA Health Forum, part of the Journal of the American Medical Association (JAMA) network, identified various key facets of medical malpractice associated with generative AI:
“The potential for large language models (LLMs) such as ChatGPT, Bard, and many others to support or replace humans in a range of areas is now clear—and medical decisions are no exception. This has sharpened a perennial medicolegal question: How can physicians incorporate promising new technologies into their practice without increasing liability risk?”
“The answer lawyers often supply is that physicians should use LLMs to augment, not replace, their professional judgment. Physicians might be forgiven for finding such advice unhelpful. No competent physician would blindly follow model output. But what exactly does it mean to augment clinical judgment in a legally defensible fashion?” (source: “ChatGPT And Physicians’ Malpractice Risk” by Michelle M. Mello and Neel Guha, JAMA Health Forum, May 18, 2023)
The noted emphasis was on how to incorporate generative AI into a medical doctor’s practice without increasing liability risk. A vital recommendation is that a medical doctor needs to realize that they cannot and should not blindly abide by whatever the generative AI emits. This, though, as noted, would generally be something that a medical doctor would likely already assume to be the case.
The devil is in the details.
A day-to-day use of generative AI is a lot different than a once-in-a-blue-moon usage. There is a tendency in day-to-day routinization to become complacent and fall into the mental trap of being less skeptical about what the generative AI is producing. The list of problems or downsides that I’ve shown earlier is a sound basis for being cautious about whether to adopt generative AI or not.
The authors also provided this recap of their overarching viewpoint on the matter:
“The rapid pace of computer science means that every day brings an improved understanding of how to harness LLMs to perform useful tasks. We share in the general optimism that these models will improve the work lives of physicians and patient care. As with other emerging technologies, physicians and other health professionals should actively monitor developments in their field and prepare for a future in which LLMs are integrated into their practice” (ibid).
We need to also consider what medical malpractice lawyers are going to do in response to the advent of generative AI for use by medical doctors.
Here’s what I mean.
One cogent legal argument is that the use of generative AI demonstrably caused an undue increase in risk associated with the performance of a medical doctor. That’s an obvious line of attack. If a medical doctor relied upon generative AI, an assertion can be made that they are expressly embodying a heightened risk due to the slew of downsides or problems that I’ve listed herein.
Let’s turn that same argument around.
Suppose a medical doctor did not make use of generative AI. This would at first glance seem clearly to be the safest means to avoid any complications about how generative AI entered into a malpractice setting. You didn’t use generative AI so it cannot seemingly be an issue at hand. Period, end of story.
A counterargument would be that if the medical doctor had in fact made overt use of generative AI, the medical doctor might not have made the malpractice failure that they are alleged to have made. Per the benefits listed earlier about generative AI, it is conceivable that the generative AI would have nudged or pushed the medical doctor to not have done whatever faltering act they supposedly did.
That is a mind-bending conundrum.
Is it best to avoid professional negligence in a medical doctor setting by avoiding generative AI altogether, or could this become a contentious issue that if generative AI had been used then the professional negligence would (arguably) not have occurred?
The arising expectation or pressing argument might be that medical doctors should be taking advantage of viably available and useful tools, including generative AI, in their medical practice efforts. Failing to keep up with a tool that could make a substantive difference in performing medical work would, or could, be portrayed as a lack of attention to modern medical practices. Such a head-in-the-sand argument might be somewhat of a stretch given today's wobbly status of generative AI, but as generative AI gets more tuned and customized to medical domains, this would seem to loom larger on the docket.
A medical doctor might increase risk by adopting generative AI. On the other hand, they might be failing to mitigate risk by not adopting generative AI. Generative AI could be construed as a crucial risk management component for practicing modern medicine. Yes, in short, it could be argued with vigor that generative AI when used suitably could be said to decrease risk.
There you have it, a dual-edged sword.
I offer a few concluding remarks on this engaging topic.
I would wager that just about everyone has heard of the Hippocratic Oath, namely the famed oath taken by medical doctors tracing back to the Greek physician Hippocrates. This is a longstanding and oft-quoted dictum. The particular catchphrase of “First do no harm” is associated with the Hippocratic Oath, meaning that a medical doctor is obligating themselves to stridently seek to help their patients and assiduously do what they can to avoid harming their patients.
You might say that we are on a precipice right now about generative AI fitting into the Hippocratic Oath.
Using generative AI can be argued as veering into the harming territory, while a counterargument is that the lack of using generative AI is where the harm actually resides. Quite a puzzle. Darned if you do, darned if you don’t. Right now, the darned if you do is tending to outweigh the darned if you don’t. This equation might gradually and eventually flip over to the other side of that coin.
I’d like to end this discussion on a lighter note, so let’s shift gears and consider a future consisting of sentient AI, also referred to as Artificial General Intelligence (AGI). Imagine that we somehow attain sentient AI. You might naturally assume that this AGI would potentially be able to take on the duties of being a medical doctor. It seems straightforward to speculate that this would occur (i.e. if you buy into the sentient AI existence possibility).
Mull over this deep thought.
Would we require sentient AI to take the Hippocratic Oath, and if so, what does this legally foretell as to holding the sentient AI responsible for its medical decisions and its devised performance as an esteemed medical doctor?
A fun bit of contemplative contrivance, well, until the day that we manage to reach sentient AI. Then, we’ll be knee-deep serious about the matter, for sure.
Mon, 22 May 2023 23:00:00 -0500 | Lance Eliot | https://www.forbes.com/sites/lanceeliot/2023/05/23/generative-ai-is-stoking-medical-malpractice-concerns-for-medical-doctors-in-these-unexpected-ways-says-ai-ethics-and-ai-law/
AI Anxiety: How These 20 Jobs Will Be Transformed By Generative Artificial Intelligence
The explosion of interest in generative artificial intelligence (AI) applications has left many of us worried about the future of work. While it has exciting implications for transforming just about every industry, there is uncertainty about who might become redundant and what skills we will need to remain useful in the future.
The truth is that just about every job that requires creativity or working with information is likely to be affected in some way. It's essential to understand that generative AI is a tool, and those who learn to harness its potential are the ones likely to prosper rather than find themselves replaced. So here are some of the jobs that are certain to change, along with the tasks that you should learn to delegate to AI.
Customer Services Agents
ChatGPT and large language models like GPT-4 can be used to build chatbots that answer customer inquiries, create transcripts of calls and summaries of interactions, giving an instant overview of issues that are important or are causing problems for customers. It can provide automated and personalized responses and offer support in many different languages. It can also be used as a training system to simulate customer inquiries.
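To make the chatbot idea concrete, here is a minimal sketch of how a support request might be assembled for a chat-style model. The model identifier, message format, and helper function are illustrative assumptions rather than any specific vendor's API, and the actual network call is deliberately omitted:

```python
# Minimal sketch of assembling a chat-completion request for a
# customer-service bot. The "system + user messages" layout follows the
# common chat-model convention; the send step is stubbed out because no
# real API call is made here.

def build_support_request(customer_message, language="en"):
    """Build a chat payload asking the model to answer a customer
    inquiry, reply in the customer's language, and log a summary."""
    system_prompt = (
        "You are a customer-support assistant. Answer concisely, "
        f"reply in language code '{language}', and end with a "
        "one-line summary of the interaction for the ticket log."
    )
    return {
        "model": "gpt-4",  # hypothetical model identifier
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": customer_message},
        ],
    }

request = build_support_request("Where is my order #1234?", language="de")
print(request["messages"][0]["role"])  # system
print(len(request["messages"]))        # 2
```

The system message is where the business rules live (tone, language, summarization for the ticket log), while the user message carries the raw inquiry; that separation is what lets the same bot be reused across languages and products.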
Marketing Professionals
Marketing content, including blog posts, social media messaging, email campaigns, and product descriptions, can be automated. This content can be personalized to target different customer segments. It can also be used to automate the creation of marketing strategies and identify the most relevant targets and goals that marketers should aim for.
Doctors and Healthcare Professionals
Doctors and other health professionals can use generative AI to generate and summarize medical reports based on patient data. It can summarize patient histories and even suggest diagnoses or treatments based on symptoms and patient presentations. Image-based generative AI can create simulated medical imagery such as X-rays and CT scans to assist with the training of medical image recognition systems.
Journalists and Writers
Journalists and writers can use generative AI to assist with writing reports by getting it to suggest outlines as well as determine the important facts that need to be covered in their reporting. They can also use it for research by having it create summaries of information or the latest developments in a field that they are covering.
Architects
Architects can feed in relevant information such as site dimensions, local building regulations, and availability of materials and get it to generate design ideas based on these criteria. Generative image-based AI can create initial design proposals or instructions for creating 3D models or 3D-printed prototypes to assist in visualizing and communicating their ideas to clients.
Software Developers
Generative language-based AI is proficient in creating computer code as well as human languages and can also suggest structures that should be used when creating programs, tools, and applications. It can debug code, quickly find errors and point out more efficient methods of achieving the desired results. It can also assist with creating technical documentation, explain how code works, and act as a tutor to help humans improve their own coding skills.
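As a small illustration of the debugging use case, the sketch below packages a buggy snippet together with its error message into a single prompt that a developer might hand to a language model. The snippet, error text, and prompt template are all hypothetical examples, not a particular tool's interface:

```python
# Sketch of preparing a debugging prompt for a language model: the code
# and the error are pasted into one request so the model can point at
# the failing construct. The wording of the template is illustrative.

BUGGY_SNIPPET = """\
def mean(values):
    return sum(values) / len(values)  # fails on an empty list
"""

ERROR_MESSAGE = "ZeroDivisionError: division by zero"

def build_debug_prompt(code, error):
    """Combine source code and an error message into one prompt."""
    return (
        "The following code raises an error.\n\n"
        f"Code:\n{code}\n"
        f"Error:\n{error}\n\n"
        "Explain the cause and suggest a minimal fix."
    )

prompt = build_debug_prompt(BUGGY_SNIPPET, ERROR_MESSAGE)
print("ZeroDivisionError" in prompt)  # True
```

Putting the code and the traceback in the same request is what gives the model enough context to localize the bug, which is why most "AI debugging" workflows boil down to exactly this kind of prompt assembly.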
Web and Graphic Designers
Web designers can use language-based generative AI to automate the creation of code, allowing them to spend more time on creative tasks. It can be used to quickly generate prototypes of websites or individual graphic design elements such as logos, which the designer can then apply their human design skills to in order to create a finished product. It can be used to quickly gather together ideas that are similar to the project the designer is working on in order to provide inspiration or insights into trends.
Translators
Tools like ChatGPT can quickly translate almost any human language. They understand different alphabets and scripts and can create personalized translations that target specific information within a source text to different audiences, depending on the amount and depth of the information they need.
Teachers
Teachers can use generative AI to automate the creation of lesson plans, suggesting the best ways to teach subjects and highlighting the important information that needs to be communicated to students. It can automate the personalization of teaching materials for students of different levels of maturity or ability. It can create and grade tests, providing in-depth insights into the level of understanding of individual learners. It can also provide teachers with information and assistance with their own professional development, ensuring they are up-to-date with the latest teaching methodologies and resources.
Banking and Financial Services Professionals
Generative AI can be used to create financial reports, including assessing credit risk, fraud detection, and many of the routine reports that financial services professionals need to file regularly. It can be used to automatically process documents such as loan applications, KYC forms, or new account applications. It can also be used in training and professional development applications to ensure employees are aware of the latest regulations and compliance requirements affecting their roles.
Product Designers
Product designers can take advantage of generative design applications that automate the creation of design documents, blueprints, and prototypes by analyzing information such as material specifications and customer requirements. Then, it can automatically create instructions for 3D-printed prototypes or machining tools to go from natural-language input to prototype or even finished products.
Lawyers
Lawyers can use generative AI to research relevant case law and rulings and to automatically create summaries of information relevant to cases they are working on. By doing this, lawyers can vastly reduce the amount of time they spend going through documents and spend more face-to-face time with their clients, getting a more in-depth understanding of their individual requirements. It can also automatically generate contracts and other legal documents to personalized specifications.
Data Analysts
Large language models can automatically analyze, review and summarize large and complicated datasets, providing overviews and insights. They can also automate the generation of reports communicating these insights, personalizing them to the individuals who need the information in a way that's specifically relevant to them, in a language they will understand.
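A sketch of that workflow in miniature: the numeric summary is computed conventionally, and the resulting plain-language line is the kind of draft report one would then ask a language model to expand and tailor to its audience. The column name and figures here are made up purely for illustration:

```python
# Sketch of the "summarize a dataset for a non-technical reader" task:
# the statistics are computed with ordinary code, and the final wording
# is what would be handed to (or requested from) a language model.
import statistics

def summarize_column(name, values):
    """Return basic descriptive statistics for one numeric column."""
    return {
        "column": name,
        "count": len(values),
        "mean": round(statistics.mean(values), 2),
        "stdev": round(statistics.stdev(values), 2),
    }

monthly_sales = [120, 135, 128, 150, 160, 155]  # illustrative data
summary = summarize_column("monthly_sales", monthly_sales)
report = (
    f"{summary['column']}: {summary['count']} observations, "
    f"average {summary['mean']}, spread (std dev) {summary['stdev']}."
)
print(report)
```

Keeping the arithmetic in conventional code and reserving the model for the narrative layer is a common design choice, since language models are far more reliable at wording than at computing exact statistics.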
Project Managers
Here, generative AI can be used to automate project planning, timelines, resource allocation, and risk management. It can assist with creating individual workflows for team members and with administrative tasks such as scheduling meetings and creating minutes, capturing key decisions and action points as they occur. It can be used to create reports that track performance during project delivery, automatically identifying obstacles and suggesting methods of optimizing processes and workflows.
Illustrators
Illustrators can use AI to generate inspiration in the form of sketches, as well as suggest elements such as color palettes and styles. It can help with keeping stylistic elements consistent across a set of images and provide image enhancements such as accurate shadowing and lighting. It can also help illustrators improve their skills by suggesting areas for improvement.
Interior Designers
Generative AI can help interior designers to visualize the way that they will transform spaces by analyzing inputs such as room dimensions, client preferences, and functional requirements. This can help designers to come up with a range of possibilities within a space. It can also provide information on new trends to allow them to provide up-to-date styles for their clients. It can help with material and product selection and provide 3D visualizations to help communicate their ideas to clients.
Tech Support Agents
Generative AI can be used to analyze incoming support tickets and automatically assign them to categories or to agents best placed to handle them. In many instances, it may be able to provide automated responses. It can get to know the problems and pain points most frequently experienced and create automated FAQ documents with relevant solutions. It can also be used to train human agents by role-playing as customers or users with unique problems and analyze their performance, offering input on how they can improve their support skills.
Human Resources Professionals
In HR, generative AI can be used to automate the creation of documents, including policy documents, employee handbooks, and onboarding guides. It can be used to communicate policy in any number of different languages, using content personalized to specific audiences, such as different job roles or levels of seniority. It can create surveys and questionnaires to help monitor employee satisfaction and analyze the results to create automated reports and feedback summaries. It can also be used during recruitment to determine which applicants have relevant skills by analyzing their resumes, personal statements, and application letters.
Scientists and Researchers
Generative AI can assist with gathering the latest information on whatever subjects are being researched and creating summaries of the most important points or whatever information is relevant to the topic at hand. It can analyze datasets and assist with the design of scientific experiments. It can create research reports and ensure that research is being carried out in line with legal and ethical standards.
Video Game Designers
Generative AI can be used to automate the generation of procedural content, creating environments and challenges that will engage the player. It can enable designers to fill their worlds with dynamic narratives and storytelling that react on-the-fly to players’ choices and actions. It can also enable them to populate their worlds with more realistic characters that react in a natural, believable way. And it can analyze player feedback, such as social media discussions or online reviews, in order to provide feedback on how well a game is received and highlight areas for improvement or bug-fixing.
Sun, 04 Jun 2023 17:47:00 -0500 | Bernard Marr | https://www.forbes.com/sites/bernardmarr/2023/06/05/ai-anxiety-how-these-20-jobs-will-be-transformed-by-generative-artificial-intelligence/
New law: Getting medical care in Florida subjected to ethical, moral, religious objections
Coming to a Florida doctor's office near you …
The telephone rings. A person answers.
Receptionist: Florida Conscience-Care Associates. May we help you?
Caller: Yes, our family just moved to Florida and we’re looking to find a medical practice here for us.
Receptionist: Welcome. Yes, we’d be happy to consider taking on you and your family as patients. First, I will need to get some information.
Caller: Sure. I figured that. We have good insurance. I have my card ready if you need me to read the policy numbers.
Receptionist: No, before we get to your insurance, I have to go over some items covered under the Protections of Medical Conscience Act.
Receptionist: It’s a new Florida law pushed by our Gov. Ron DeSantis that gives all the medical providers in the state the right to refuse providing health care services to others based on the provider’s own moral, ethical or religious beliefs.
Florida law puts personal beliefs over medical care
Caller: That doesn’t sound right.
Receptionist: Well, it’s the law. And it includes doctors, nurses, ambulance drivers, pharmacists, mental health professionals, lab technicians, 911 operators, nursing home workers, hospital administrators …
Caller: You mean they can all deny me and my family medical care based on a feeling?
Receptionist: As long as it's not an emergency. The law calls it a CBO. A conscience-based objection.
Caller: And what does that mean?
Receptionist: Whatever the medical provider wants it to mean. If somebody here in our office decides that his or her morals are being violated by providing patient care to you, well, you lose.
Caller: Oh, I get it. This is just about the LGBT stuff, right? I’ve heard that Florida was like this. Going after transgender people. Well, relax. My children are straight arrows. And they’ve never been to a drag queen story hour.
Receptionist: Let’s not get ahead of ourselves here. This process will work best if you allow me to do the proper screening.
Caller: You could deny me medical care based on other things?
Florida takes a "do some harm" approach to medical care
Receptionist: You mentioned you were married. Same-sex couple?
Caller: Why is that relevant? I mean, whose business is it whether my marriage partner is a man or a woman?
Receptionist: I don’t care. But Henry, he’s the one who draws blood in the office, well, he filed paperwork under the new law that says his strongly held religious beliefs about marriage prevent him from providing any medical services to people in same-sex marriages.
Caller: That’s just state-licensed bigotry.
Receptionist: Gov. DeSantis calls it stamping out “medical authoritarianism.”
Caller: What a crock of …
Receptionist: Be careful. This may be a recorded line. And he's got his own citizen militia now.
Caller: I thought Florida was supposed to be the “freedom state.”
Receptionist: Relax. I know, it’s disorienting at first. But you’ll get numb to it.
Caller: I read all that stuff about Florida turning into a fascist state but I thought it was just hyperbole.
Receptionist: Take deep breaths. You’re doing fine so far. I’m just going to continue through the list here to see what services we may or may not be able to offer you or your family here.
Caller: You mean there’s more?
Receptionist: We’re just getting started, ma’am.
Caller: Please refer me to another office that will treat me and my family without conditions.
Patients' right to treatment lessened in new law
Receptionist: The new law says we don't have to do that.
Receptionist: Do your kids watch Disney movies?
Caller: Oh, for cryin’ out loud.
Receptionist: Especially that new animated one, Strange World, that features a bi-racial gay teen? That really annoys Sharon, the X-ray tech. She says she’s ethically conflicted doing scans for parents who allow their children to be “indoctrinated by the woke Disney agenda.”
Receptionist: I’m going to put you down for a “No.” Trust me, you don’t want to trigger Sharon. Her brother just got out of the can from Jan. 6 charges.
Caller: I never had to do this in New York.
Receptionist: Please don’t mention New York.
Caller: What’s wrong with saying I’m from New York?
Receptionist: If you mention New York, Roger the physician assistant will ask you if that means you’re a Democrat, and if you say, “yes,” he will refuse to provide medical services to you under the new law.
Caller: How is that even possible?
Receptionist: Roger is a QAnon believer. He sincerely believes that the nation’s top Democrats drink the blood of babies in the basement of a pizza restaurant. And so he has claimed a moral, ethical and religious exemption to treating the people who empower them.
Caller: Listen, my husband and I are just your average, every day, caring people who are raising four terrific children, and I can’t see why …
Receptionist: Whoa. Slow down. Did you say “four children?”
The Consortium of Medical, Engineering, and Dental Colleges of Karnataka will release the COMEDK 2023 final answer key tomorrow, June 6, as per the official schedule. Candidates who have appeared for COMEDK 2023 can check and download the final answer key from the official website, i.e., comedk.org, once the link is active. The COMEDK 2023 rank cards will be released on June 10.
As per the information, the provisional answer key was released on May 30, and candidates were allowed to raise objections till June 1, 2023. The COMEDK 2023 examination was held on May 28, 2023.
HOW TO CHECK COMEDK 2023 ANSWER KEY
Step 1: Visit the official website, i.e., comedk.org.
Step 2: On the homepage, click on the link that reads, 'COMEDK 2023 Final Answer Key'.
Step 3: A new page will appear on the screen with a PDF file.
Step 4: Candidates can view the final answer key and can download the same for future reference.
HOW TO DOWNLOAD COMEDK 2023 RANK CARD
As per the website, the COMEDK 2023 rank cards will be released on June 10. Candidates can follow these steps to check and download the same:
Step 1: Visit the official website, i.e., comedk.org.
Step 2: On the homepage, click on the link that reads, 'COMEDK 2023 Rank Card'. (Once the link is active)
Step 3: A new page will appear on the screen.
Step 4: Enter the asked credentials and click on the submit option.
Step 5: Your COMEDK 2023 Rank Card will appear on the screen.
Step 6: Download the same and take a printout of it for future reference.
AUSTIN, Texas – Americans in the Lone Star State weighed in on job displacement from artificial intelligence, with several telling Fox News they believe their jobs would eventually be replaced.
"A lot of coworkers or people that I know have been laid off at Indeed and things like that because they don't want to hire real people anymore," said Gabriel, who works in tech. "They would just rather do AI."
Advances in AI could cause up to 300 million jobs to be lost or diminished globally, Goldman Sachs predicted in a March 26 report. Artificial intelligence could create "significant disruption" across labor markets worldwide by fully or partially replacing humans in the near future, according to the analysis.
"For fast foods or … customer service, I believe they're gonna do AI in the future for that," she told Fox News. "But as nurses, as doctors, as any medical provider, there's no way they can replace an AI with the medical profession."
Yet ChatGPT may provide better medical advice than humans in some instances, according to a recent University of California San Diego study.
Researchers asked a group of doctors and ChatGPT to answer the same random sample of roughly 200 medical questions posted on Reddit. A separate panel of health care professionals evaluated the answers for "quality and empathy" and preferred ChatGPT's answers for nearly 80% of the responses.
A robot waiter serves a woman at Kura Revolving Sushi Bar on Sept. 14, 2022, in Orlando, Florida. (Paul Hennessy/Anadolu Agency via Getty Images)
Dewey said he believed software engineering's "higher abstraction" roles might stave off AI displacement, at least for the next few decades.
"As far as writing code, AI will definitely replace me," Dewey, himself a software engineer, said. "In terms of being strategic about what code to write and how to organize the code, AI's still 30 years away from doing that, is my guess."