GAFAMS, STARTUPS & INNOVATION IN HEALTHCARE by PHARMAGEEK
Rescooped by Lionel Reichardt / le Pharmageek from healthcare technology

3D printing technology boosts hospital efficiency and eases pressures


Researchers investigating the benefits of 3D printing technology found it can deliver significant improvements to the running of hospitals.

 

The research, which compared the drawbacks and advantages of using 3D printing technology in hospitals, has been published in the International Journal of Operations and Production Management.

 

 

The study revealed that introducing such technology into hospitals could help alleviate many of the strains faced by the UK healthcare system and by healthcare systems worldwide.

Boosting surgery success rates

- 3D printing makes it possible for surgical teams to print 3D models based on an individual patient’s surgical needs, providing more detailed and exact information for the surgeon to plan and practise the surgery, minimising the risk of error or unexpected complications.

- The use of 3D-printed anatomical models was also useful when communicating the details of the surgery to the patient, helping to increase their confidence in the procedure.

Speeding up patient recovery time

- Significant reductions in post-surgery complications, patient recovery times, and the need for subsequent hospital appointments or treatments.

Speeding up procedures

- Provides surgeons with custom-built tools for each procedure; the findings reveal that surgeries lasting four to eight hours were shortened by 1.5 to 2.5 hours when patient-specific instruments were used.

- Could also make surgeries less invasive (for example, removing less bone or tissue).

- Results in fewer associated risks for the patient (for example, by requiring less anaesthesia).

Real-life training opportunities

- Enables trainee surgeons to familiarise themselves with the steps of complex surgeries by practising their skills on examples that accurately replicate real patient problems, and with greater variety.

Careful consideration required

Despite the research showing strong and clear benefits of using 3D printing, Dr Chaudhuri and his fellow researchers urge careful consideration of the financial costs.

 

3D printing is a significant financial investment for hospitals to make. In order to determine whether such an investment is worthwhile, the researchers have also developed a framework to aid hospital decision-makers in determining the return on investment for their particular institution.

 

read the study at https://www.researchgate.net/publication/344956611_Accepted_for_publication_in_International_Journal_of_Operations_and_Production_Management_Should_hospitals_invest_in_customised_on-demand_3D_printing_for_surgeries

 

read more at https://www.healtheuropa.eu/3d-printing-technology-boosts-hospital-efficiency-and-eases-pressures/108544/

 




Via nrip
Ray Daugherty's curator insight, April 17, 2022 11:26 PM
Anything that can help hospitals is a good thing, and having a 3D printer is smart because it can really help doctors and surgeons. As the article notes, these printers are making surgeries more successful, since the surgeon can practise before going into the operating room. 3D printers are also helping with recovery times and speeding up procedures. This will be beneficial moving forward, because hospitals can treat more patients and have a better chance of things going smoothly.

Patients may not take advice from AI doctors who know their names


As the use of artificial intelligence (AI) in health applications grows, health providers are looking for ways to improve patients' experience with their machine doctors.

 

Researchers from Penn State and University of California, Santa Barbara (UCSB) found that people may be less likely to take health advice from an AI doctor when the robot knows their name and medical history. On the other hand, patients want to be on a first-name basis with their human doctors.

 

When the AI doctor used a patient's first name and referred to their medical history in the conversation, study participants were more likely to find the AI health chatbot intrusive and less likely to heed its medical advice, the researchers found. By contrast, participants expected human doctors to differentiate them from other patients and were less likely to comply when a human doctor failed to remember their information.

 

The findings offer further evidence that machines walk a fine line in serving as doctors.

 

Machines do have advantages as medical providers, said Joseph B. Walther, distinguished professor in communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB. He said that, like a family doctor who has treated a patient for a long time, computer systems could — hypothetically — know a patient’s complete medical history. In comparison, seeing a new doctor or a specialist who knows only your latest lab tests might be a more common experience, said Walther, who is also director of the Center for Information Technology and Society at UCSB.

 

“This struck us with the question: ‘Who really knows us better: a machine that can store all this information, or a human who has never met us before or hasn’t developed a relationship with us, and what do we value in a relationship with a medical expert?’” said Walther. “So this research asks, who knows us better — and who do we like more?”

 

Accepting AI doctors

As medical providers look for cost-effective ways to provide better care, AI medical services may provide one alternative. However, AI doctors must provide care and advice that patients are willing to accept, according to Cheng Chen, doctoral student in mass communications at Penn State.

 

“One of the reasons we conducted this study was that we read in the literature a lot of accounts of how people are reluctant to accept AI as a doctor,” said Chen. “They just don’t feel comfortable with the technology and they don’t feel that the AI recognizes their uniqueness as a patient. So, we thought that because machines can retain so much information about a person, they can provide individuation, and solve this uniqueness problem.”

 

The findings suggest that this strategy can backfire. “When an AI system recognizes a person’s uniqueness, it comes across as intrusive, echoing larger concerns with AI in society,” said co-author S. Shyam Sundar of Penn State.

 

In the future, the researchers expect more investigations into the roles that authenticity and the ability for machines to engage in back-and-forth questions may play in developing better rapport with patients.

 

read more at https://news.psu.edu/story/657391/2021/05/10/research/patients-may-not-take-advice-ai-doctors-who-know-their-names

 




Via nrip

AI can now design new antibiotics in a matter of days


Imagine you’re a scientist who needs to discover a new antibiotic to fight off a scary disease. How would you go about finding it?

 

Typically, you’d have to test lots and lots of different molecules in the lab until you find one that has the necessary bacteria-killing properties. You might find some contenders that are good at killing the bacteria only to realize that you can’t use them because they also prove toxic to humans. It’s a very long, very expensive, and probably very aggravating process.

 

But what if, instead, you could just type into your computer the properties you’re looking for and have your computer design the perfect molecule for you?

 

That’s the general approach IBM researchers are taking, using an AI system that can automatically generate the design of molecules for new antibiotics.

 

In a new paper, published in Nature Biomedical Engineering, the researchers detail how they’ve already used it to quickly design two new antimicrobial peptides — small molecules that can kill bacteria — that are effective against a bunch of different pathogens in mice.

 

Normally, this molecule discovery process would take scientists years. The AI system did it in a matter of days.

 

That’s great news, because we urgently need faster ways to create new antibiotics.

How IBM’s AI system works

IBM’s new AI system relies on something called a generative model. To understand it at its simplest level, we can break it down into three basic steps.

 

First, the researchers start with a massive database of known peptide molecules.

 

Then the AI pulls information from the database and analyzes the patterns to figure out the relationship between molecules and their properties. It might find that when a molecule has a certain structure or composition, it tends to perform a certain function.

 

This allows it to “learn” the basic rules of molecule design.

 

Finally, researchers can tell the AI exactly what properties they want a new molecule to have. They can also input constraints (for example: low toxicity, please!). Using this info on desirable and undesirable traits, the AI then designs new molecules that satisfy the parameters. The researchers can pick the best one from among them and start testing on mice in a lab.
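The three steps above can be sketched with a deliberately toy stand-in for the real system: a first-order Markov model learned from a handful of placeholder peptide sequences, with net charge as a stub property filter. The sequences, the property predictor, and all function names here are illustrative assumptions, not IBM's actual deep generative model.

```python
import random
from collections import defaultdict

random.seed(0)  # for reproducibility of this sketch

# Toy "database" of peptide sequences (one-letter amino-acid codes).
# A real system trains on thousands of entries; these are placeholders.
KNOWN_PEPTIDES = ["KWKLFKKI", "GIGKFLKK", "KKLFKKIL", "FLGKLLKK"]

def learn_transitions(peptides):
    """Steps 1-2: learn which residue tends to follow which (first-order Markov model)."""
    counts = defaultdict(list)
    for seq in peptides:
        for a, b in zip(seq, seq[1:]):
            counts[a].append(b)
    return counts

def net_charge(seq):
    """Stand-in property predictor: count of cationic minus anionic residues."""
    return sum(seq.count(r) for r in "KR") - sum(seq.count(r) for r in "DE")

def design(transitions, length=8, min_charge=3, tries=1000):
    """Step 3: sample candidate molecules and keep those meeting the constraints."""
    designs = []
    for _ in range(tries):
        seq = random.choice(list(transitions))  # start from any learned residue
        while len(seq) < length:
            followers = transitions.get(seq[-1])
            if not followers:
                break  # dead end: no observed successor for this residue
            seq += random.choice(followers)
        if len(seq) == length and net_charge(seq) >= min_charge:
            designs.append(seq)
    return designs

candidates = design(learn_transitions(KNOWN_PEPTIDES))
```

In the real pipeline the "property filter" is itself a learned model and the generator is a deep network, but the generate-then-constrain structure is the same.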

 

The IBM researchers claim that their approach outperformed other leading methods for designing new antimicrobial peptides by 10 percent. They found that they were able to design two new antimicrobial peptides that are highly potent against diverse pathogens, including multidrug-resistant K. pneumoniae, a bacterium known for causing infections in hospital patients. Happily, the peptides had low toxicity when tested in mice, an important signal about their safety (though not everything that’s true for mice ends up being generalizable to humans).

 

read the original unedited article at  https://www.vox.com/future-perfect/22360573/ai-ibm-design-new-antibiotics-covid-19-treatments

 

read the paper by the IBM researchers - Accelerated antimicrobial discovery via deep generative models and molecular dynamics simulations


Via nrip
nrip's curator insight, April 10, 2021 11:55 PM

This is an exciting paper to read. Using AI to identify brand-new types of antibiotics by training a neural network is not new; it has been, and is being, explored in a number of labs around the world. Last year we read about the use of AI to predict which molecules will have bacteria-killing properties. Slowly but surely, as research builds upon research in this space, we will move towards data-driven personalised medicines tailored to individuals rather than generalised to a best-case fit.

 

But will a day ever come when we have medicines which have no side effects?

 

What do you think?


Artificial intelligence could alert for focal skeleton/bone marrow uptake in Hodgkin’s lymphoma patients staged with FDG-PET/CT


Skeleton/bone marrow involvement in patients with newly diagnosed Hodgkin’s lymphoma (HL) is an important predictor of adverse outcomes [1]. Studies show that FDG-PET/CT upstages patients with uni- or multifocal skeleton/bone marrow uptake (BMU) even when iliac crest bone marrow biopsy fails to find evidence of histology-proven involvement. The general recommendation is, therefore, that bone marrow biopsy can be avoided when FDG-PET/CT is performed at staging.

 

 

Our aim was to develop an AI-based method for the detection of focal skeleton/BMU and quantification of diffuse BMU in patients with HL undergoing staging with FDG-PET/CT. The output of the AI-based method in a separate test set was compared to the image interpretation of ten physicians from different hospitals. Finally, the AI-based quantification of diffuse BMU was compared to manual quantification.

 

Artificial intelligence-based classification

A convolutional neural network (CNN) was used to segment the skeletal anatomy [11]. Based on this CNN, the bone marrow was defined by excluding the edges from each individual bone; more precisely, 7 mm was excluded from the humeri and femora, 5 mm from the vertebrae and hip bones, and 3 mm from the remaining bones.
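A minimal 2D sketch of this marrow definition, assuming a binary bone mask and uniform pixel spacing. The real method operates in 3D on the CNN segmentation with per-bone margins (7/5/3 mm); `marrow_mask` and its parameters are hypothetical names for illustration only.

```python
def marrow_mask(bone, margin_mm, spacing_mm=1.0):
    """Keep only bone pixels farther than `margin_mm` from the bone edge,
    i.e., a morphological erosion of the bone mask by the given margin.
    `bone` is a list of rows of 0/1 values (a single 2D slice)."""
    r = int(round(margin_mm / spacing_mm))  # margin in pixels
    h, w = len(bone), len(bone[0])

    def deep_enough(y, x):
        # A pixel survives erosion if every pixel within Chebyshev
        # distance r lies inside the image and is bone.
        return all(0 <= y + dy < h and 0 <= x + dx < w and bone[y + dy][x + dx]
                   for dy in range(-r, r + 1) for dx in range(-r, r + 1))

    return [[1 if bone[y][x] and deep_enough(y, x) else 0 for x in range(w)]
            for y in range(h)]
```

For example, eroding a 5x5 block of bone by a 1-pixel margin leaves only the inner 3x3 region as "marrow".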

Focal skeleton/bone marrow uptake

The basic idea behind our approach is that the distribution of non-focal BMU has a light tail, and most pixels will have an uptake reasonably close to the average. There will still be variations between different bones; most importantly, we found that certain bones were much more likely to show diffuse BMU than others. Hence, we cannot use the same threshold for focal uptake in all bones. At the other extreme, treating each bone individually is too susceptible to noise. As a compromise, we chose to divide the bones into two groups:

  • “spine”: defined as the vertebrae, sacrum, and coccyx, as well as regions of the hip bones within 50 mm of these locations, i.e., including the sacroiliac joints.

  • “other bones”: defined as the humeri, scapulae, clavicles, ribs, sternum, femora, and the remaining parts of the hip bones.

For each group, the focal standardized uptake values (SUVs) were quantified using the following steps:

  1. Threshold computation. A threshold (THR) was computed using the mean and standard deviation (SD) of the SUV inside the bone marrow. The threshold was set to

     THR = SUVmean + 2 × SD.

  2. Abnormal bone region. The abnormal bone region was defined in the following way: only the pixels segmented as bone and where SUV > THR were considered. To reduce the issues of PET/CT misalignment and spill-over, a watershed transform was used to assign each of these pixels to a local maximum in the PET image. If this maximum was outside the bone mask, the uptake was assumed to be leaking into the bone from other tissues and was removed. Finally, uptake regions smaller than 0.1 mL were removed.

  3. Abnormal bone SUV quantification. The mean squared abnormal uptake (MSAU) was first calculated as

     MSAU = mean of (SUV − THR)² over the abnormal bone region.
 

To quantify the abnormal uptake, we used the total squared abnormal uptake (TSAU), rather than the more common total lesion glycolysis (TLG). We believe TLG tends to overestimate the severity of larger regions with moderate uptake. TSAU will assign a much smaller value to such lesions, reflecting the uncertainty that is often associated with their classification. Instead, TSAU will give a larger weight to small lesions with very high uptake. This reflects both the higher certainty with respect to their classification and the severity typically associated to very high uptake.
     TSAU = MSAU × (volume of the abnormal bone region).

This calculation leads to two TSAU values; one for the “spine” and one for the “other bones”. As the TSAU value can be nonzero even for patients without focal uptake, cut-off values were tuned using the training cohort. The AI method was adjusted in the training group to have a positive predictive value of 65% and a negative predictive value of 98%. For the “spine”, a cut-off of 0.5 was used, and for the “other bones”, a cut-off of 3.0 was used. If one of the TSAU values was higher than the corresponding cut-off, the patient was considered to have focal uptake.
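The threshold, MSAU/TSAU, and cut-off logic above can be sketched as follows. This is a simplification: the watershed-based spill-over removal and per-region volume filtering are omitted (the minimum-volume check is applied to the pooled abnormal pixels), and all function names are illustrative, not from the paper's code.

```python
from statistics import mean, pstdev

SPINE_CUTOFF, OTHER_CUTOFF = 0.5, 3.0  # tuned on the training cohort (per the text)

def tsau(suv_values, voxel_volume_ml=1.0, min_region_ml=0.1):
    """Steps 1-3 for one bone group. `suv_values` is a flat list of SUVs
    for the marrow pixels of that group."""
    thr = mean(suv_values) + 2 * pstdev(suv_values)   # step 1: THR = mean + 2 SD
    abnormal = [v for v in suv_values if v > thr]     # step 2: abnormal pixels
    volume = len(abnormal) * voxel_volume_ml
    if volume < min_region_ml:                        # discard tiny regions
        return 0.0
    msau = mean((v - thr) ** 2 for v in abnormal)     # step 3: MSAU
    return msau * volume                              # TSAU = MSAU x volume

def has_focal_uptake(spine_suvs, other_suvs):
    """Patient flagged as focal uptake if either group's TSAU exceeds its cut-off."""
    return tsau(spine_suvs) > SPINE_CUTOFF or tsau(other_suvs) > OTHER_CUTOFF
```

Note how TSAU's squared term rewards a few very hot pixels far more than a large region of mildly elevated uptake, which is exactly the weighting the authors argue for over TLG.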

 

Results

Focal uptake

Fourteen of the 48 cases were classified as having focal skeleton/BMU by the AI-based method. The majority of physicians classified 7/48 cases as positive and 41/48 cases as negative for having focal skeleton/BMU. The majority of the physicians agreed with the AI method in 39 of the 48 cases. Six of the seven positive cases (86%) identified by the majority of physicians were identified as positive by the AI method, while the seventh was classified as negative by the AI method and by three of the ten physicians.

 

Thirty-three of the 41 negative cases (80%) identified by the majority of physicians were also classified as negative by the AI method. In seven of the remaining eight patients, 1–3 physicians (out of the ten total) classified the cases as having focal uptake, while in one of the eight cases none of the physicians classified it as having focal uptake. These findings indicate that the AI method has been developed towards high sensitivity, which is necessary to highlight suspicious uptake.
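As a quick sanity check, the counts reported above fit together; all numbers here are taken verbatim from the text.

```python
cases = 48
ai_pos = 14                      # cases the AI method flagged as focal uptake
maj_pos, maj_neg = 7, 41         # physician-majority positive / negative labels
agree_pos, agree_neg = 6, 33     # cases where the AI matched the majority label

assert maj_pos + maj_neg == cases
assert agree_pos + agree_neg == 39                  # AI agreed with majority in 39/48
assert agree_pos + (maj_neg - agree_neg) == ai_pos  # 6 true + 8 extra = 14 AI positives
sensitivity = agree_pos / maj_pos                   # 6/7, the "86%" in the text
specificity = agree_neg / maj_neg                   # 33/41, the "80%" in the text
```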

 

Conclusions

The present study demonstrates that an AI-based method can be developed to highlight suspicious focal skeleton/BMU in HL patients staged with FDG-PET/CT. This AI-based method can also objectively report high versus low diffuse BMU by calculating the SUVmedian value in the whole spine marrow and the liver. Additionally, the study demonstrated that inter-observer agreement regarding both focal and diffuse BMU is only moderate among nuclear medicine physicians with varying levels of experience working at different hospitals. Finally, our results show that the automated quantification of diffuse BMU is comparable to the manual ROI method.

 

read the original paper at https://www.nature.com/articles/s41598-021-89656-9

 


Via nrip

Grant awarded to develop artificial intelligence to improve stroke screening and treatment in smaller hospitals


New artificial intelligence technology that uses a common CT angiography (CTA), as opposed to the more advanced imaging normally required to help identify patients who could benefit from endovascular stroke therapy (EST), is being developed at The University of Texas Health Science Center at Houston (UTHealth).

 

Two UTHealth researchers worked together to create a machine-learning artificial intelligence tool that could be used for assessing a stroke at every hospital that takes care of stroke patients - not just at large academic hospitals in major cities. 

 

Research to further develop and test the technology tool is funded through a five-year, $2.5 million grant from the National Institutes of Health (NIH). 

 

"The vast majority of stroke patients don't show up at large hospitals, but in those smaller regional facilities. And most of the emphasis on screening techniques is only focused on the technologies used in those large academic centers. With this technology, we are looking to change that," said Sunil Sheth, MD, assistant professor of neurology at McGovern Medical School at UTHealth.

 

Sheth set out with Luca Giancardo, PhD, assistant professor with the Center for Precision Health at UTHealth School of Biomedical Informatics, to develop a quicker way to assess patients. The result was a novel deep neural network architecture that leverages brain symmetry. Using CTAs, which are more widely available, the system can determine the presence or absence of a large vessel occlusion and whether the amount of "at-risk" tissue is above or below the thresholds seen in those patients who benefitted from EST in the clinical trials.
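The brain-symmetry idea can be illustrated with a hand-crafted stand-in: mirror an axial slice across the midline and measure left/right differences, since a large vessel occlusion typically produces a one-sided abnormality. The actual UTHealth system is a deep neural network trained on CTAs; this toy `asymmetry_score` is only an analogy for the symmetry cue it exploits.

```python
def asymmetry_map(slice_2d):
    """Flip an axial slice (list of rows of numbers) across the midline and
    take absolute left/right differences; large values hint at a
    unilateral abnormality."""
    flipped = [row[::-1] for row in slice_2d]
    return [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(slice_2d, flipped)]

def asymmetry_score(slice_2d):
    """Mean absolute left/right asymmetry over the whole slice."""
    cells = [v for row in asymmetry_map(slice_2d) for v in row]
    return sum(cells) / len(cells)
```

A perfectly symmetric slice scores 0; any one-sided signal pushes the score up. A learned model can of course exploit symmetry far more subtly than this pixelwise difference.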

 

"This is the first time a data set is being specifically collected aiming to address the lack of quality imaging available for stroke patients at smaller hospitals," Giancardo said.

 

read the complete press release with further details on the work at https://www.uth.edu/news/story.htm?id=9fccdefb-ff91-4775-a759-a786689956ea

 


Via nrip