|
BBL 596E/696E Seminars (CRN 11662/11665)
BLU 596E/696E Seminars (CRN 14812/14818)
|
| 1 |
| Title |
Credit Risk Segmentation Using Machine Learning and SHapley Additive exPlanations for Borrower Profiling |
| Speaker |
Büşra Berfim Karakurt |
| Date |
December 3rd, 2025 |
| Time |
13:30 (GMT+3) |
Abstract
Credit risk segmentation enables financial institutions to identify borrower groups with similar risk behaviors, improving portfolio management and decision strategies. This study presents a data-driven approach to borrower profiling using machine learning and explainable AI techniques. The dataset, derived from anonymized consumer loan records, underwent comprehensive preprocessing and feature engineering. K-Means clustering was employed to segment borrowers based on key credit risk indicators. To ensure interpretability, SHapley Additive exPlanations (SHAP) were applied to binary classifiers trained in a one-vs-rest structure for each cluster, identifying the most influential features defining segment characteristics. Results indicate that while statistical metrics such as Silhouette and Davies–Bouldin scores offer initial validation, interpretability-driven evaluation yields more meaningful and actionable segmentation. This integration of clustering and explainable models provides a practical framework for transparent borrower profiling, supporting risk-based pricing, collections strategies, and responsible lending practices.
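The segment-then-explain loop described above can be sketched with scikit-learn alone; the synthetic data, cluster count, and logistic one-vs-rest models below are illustrative stand-ins, and a SHAP explainer would replace the simple coefficient inspection in the actual pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for anonymized borrower features
# (e.g. utilization, delinquency count, income) -- illustrative only.
X = rng.normal(size=(500, 4))
X = StandardScaler().fit_transform(X)

# Step 1: segment borrowers with K-Means.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

# Step 2: one binary classifier per cluster (one-vs-rest). In the study
# a SHAP explainer is applied to each such model; here the logistic
# coefficients act as a rough linear proxy for feature influence.
for k in range(3):
    y_k = (labels == k).astype(int)
    clf = LogisticRegression(max_iter=1000).fit(X, y_k)
    top = int(np.argmax(np.abs(clf.coef_[0])))
    print(f"cluster {k}: most influential feature index = {top}")
```

The same structure extends to tree models plus `shap.TreeExplainer` when per-sample attributions, not just global rankings, are needed.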
|
| Location |
Informatics Institute
Room 411
|
|
| 2 |
| Title |
Two-Stage Learning Approach for Glioblastoma Cell Segmentation |
| Speaker |
Yusuf Ata |
| Date |
December 3rd, 2025 |
| Time |
13:50 (GMT+3) |
Abstract
Segmentation of glioblastoma-astrocytoma (U373) cells in microscopy images is critical for biological image processing, yet challenging because of low contrast, complex morphology, and limited expert annotations. While Gold Truth (GT) masks are highly accurate, they require expert knowledge and time. On the other hand, Silver Truth (ST) masks, which are automatically generated by computers, are useful but include noise and inconsistencies. This study presents a two-stage learning pipeline that exploits both annotation types using the PhC-C2DH-U373 dataset from the Cell Tracking Challenge. In the first stage, a MONAI-based U-Net is pre-trained on the ST masks to learn general cell structure and texture. In the second stage, the model, initialized with the pre-trained weights, is fine-tuned on a limited set of GT masks to correct inaccuracies and enhance precision. Experimental results indicate that this two-stage pipeline significantly improves segmentation performance, raising the Dice coefficient from 0.9030 (pre-training) to 0.9202 (majority vote over a 10-fold ensemble) while reducing dependence on manual annotation.
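The headline metric here is the Dice coefficient; a minimal NumPy definition for binary masks (a generic sketch, not the MONAI implementation used in the study) is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

# Identical masks score 1.0; partial overlap scores in between.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_coefficient(a, b))  # 2*1 / (2+1) ≈ 0.667
```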
|
| Location |
Informatics Institute
Room 411
|
|
| 3 |
| Title |
A Three-Class Deep Learning Approach for Accurate Liver Steatosis Segmentation |
| Speaker |
Serhat Ovat |
| Date |
December 3rd, 2025 |
| Time |
14:10 (GMT+3) |
Abstract
Liver steatosis is characterized by the abnormal accumulation of fat within the liver. Accurate quantification of steatosis in liver histology images is crucial for liver transplantation assessment. Reliable steatosis segmentation in digital whole-slide images requires accurate and precise fat segmentation that can clearly distinguish fat from other tissue structures. When all relevant tissue components are precisely identified, deep learning models can learn more informative representations and consequently produce more accurate predictions. In this study, we introduce a three-class ground-truth labeling scheme and employ a U-Net–based deep learning model to segment and predict fat regions in liver histology images. We evaluate our approach on 22 Masson’s Trichrome-stained whole-slide images (11 training and 11 test) from liver biopsy specimens, from which 200 training and 83 test image patches are extracted. We further apply a pathologist-defined scoring system to automatically assign a steatosis score to each tissue sample. The proposed model achieves an average Intersection over Union (IoU) of 77% for steatosis regions, an average F1-score of 86.66%, and a mean absolute percentage error (MAPE) of 3.73 in steatosis scoring.
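The two evaluation metrics reported above, IoU for segmentation quality and MAPE for scoring error, can be written in a few lines of NumPy; these are the generic definitions, not the study's exact implementation:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)

def mape(y_true, y_pred) -> float:
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
print(iou(a, b))                 # 1 intersecting pixel / 3 in the union
print(mape([10, 20], [11, 19]))  # (0.10 + 0.05) / 2 * 100 = 7.5
```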
|
| Location |
Informatics Institute
Room 411
|
|
| 4 |
| Title |
Data Augmentation of Microscopy Images Using Generative Artificial Intelligence Techniques: A Review and Experimental Blueprint |
| Speaker |
Beyza Nur Karakuş |
| Date |
December 10th, 2025 |
| Time |
13:30 (GMT+3) |
Abstract
In the field of medical image analysis, particularly cervical cytology, the scarcity and heterogeneity of labeled microscopy data present significant challenges for training reliable machine-learning models. This study maps the landscape of generative data augmentation to provide practical, data-backed guidance on this problem. The methodology first reviews the literature across four concentric layers (Pap-smear, microscopy, medical imaging, general computer vision) and compares foundational generative models (GANs, VAEs, and diffusion). To formalize these concepts into a practical case study, the paper then proposes and executes a specific augmentation pipeline using a class-conditional denoising diffusion probabilistic model (DDPM) to enhance the SIPaKMeD cervical-cell dataset. Synthetic images were combined with original samples to train a ResNet-50 classifier. The results provide a critical pipeline recommendation: comparative experiments demonstrated that while the augmentation impacted class-level performance, it slightly decreased overall accuracy (from 94.33% to 93.84%). Visual inspection revealed synthetic artifacts, highlighting the need for refined quality control, underscoring the specific limitations of diffusion-based augmentation, and providing practical insights for improving generalizability in medical AI models.
|
| Location |
Informatics Institute
Room 411
|
|
| 5 |
| Title |
Deep Learning Architectures for Financial Time-Series Forecasting: A Comparative Study of LSTM and Transformer Models |
| Speaker |
Ali Gölge |
| Date |
December 10th, 2025 |
| Time |
13:50 (GMT+3) |
Abstract
Financial time-series forecasting is challenging due to non-stationarity, volatility, and complex nonlinear dependencies. In particular, short-horizon stock price prediction requires models that can capture both local temporal patterns and broader market structure. Deep learning architectures such as Long Short-Term Memory (LSTM) networks and Transformer-based models have emerged as two dominant approaches for sequence modeling in this context. This paper presents a concise comparative analysis of LSTM and Transformer architectures for stock price forecasting, following the methodology of Lin (2023) [1], who evaluates both models on A-share data from the Chinese capital market. Historical prices of the Shanghai Composite Index at daily and minute-level frequencies are processed into supervised learning samples, scaled for numerical stability, and used as inputs to both architectures. The models are trained under comparable conditions and evaluated using mean absolute error (MAE) and mean squared error (MSE). The empirical findings reported in the original study indicate that LSTM achieves lower MAE and MSE than the Transformer on both daily and minute-level data, reflecting strong short-term error minimisation [1]. However, a closer inspection shows that LSTM tends to exploit simple autocorrelation, effectively lagging the true prices rather than learning deeper relationships. In contrast, the Transformer avoids this lag behaviour and learns richer internal dependencies among price movements, suggesting higher potential for capturing the underlying structure of financial time series. Overall, the comparison highlights a trade-off between immediate predictive accuracy and representational expressiveness, underscoring the importance of architecture choice in practical forecasting applications.
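The preprocessing step described above, turning a price series into supervised samples and scaling them, can be sketched as follows; the toy series and lookback length are illustrative:

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int):
    """Turn a 1-D price series into (X, y) supervised samples:
    each input is `lookback` consecutive values, the target is the next one."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

prices = np.arange(10, dtype=float)   # toy stand-in for index closing prices
X, y = make_windows(prices, lookback=3)

# Min-max scaling for numerical stability (in practice, fit the scaling
# constants on the training split only to avoid look-ahead leakage).
lo, hi = X.min(), X.max()
X_scaled = (X - lo) / (hi - lo)
print(X.shape, y.shape)  # (7, 3) (7,)
```

Both an LSTM and a Transformer would then consume `X_scaled` as fixed-length sequences and regress `y`.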
|
| Location |
Informatics Institute
Room 411
|
|
| 6 |
| Title |
Breast Cancer Imaging with Digital Breast Tomosynthesis |
| Speaker |
Hasan Taşkın |
| Date |
December 10th, 2025 |
| Time |
14:10 (GMT+3) |
Abstract
Digital Breast Tomosynthesis (DBT) is a groundbreaking three-dimensional imaging technique for the diagnosis of breast cancer. Compared to traditional two-dimensional mammography, it reduces tissue superposition, allowing clearer visualization of lesions, particularly in dense breasts. This study aims to evaluate the success of DBT in diagnosing breast cancer and its impact on biopsy requirements. This literature review examines the development of DBT’s projection and back-projection reconstruction algorithms and their impact on diagnostic performance. Simulation studies assess the performance of projection and back-projection methods, and experimental analyses are presented. Iterative reconstruction algorithms have been observed to be more effective than Filtered Back-projection in preserving image quality and reducing artifacts, especially with reduced projection counts.
|
| Location |
Informatics Institute
Room 411
|
|
| 7 |
| Title |
Integration of Artificial Intelligence Technologies into Financial Markets |
| Speaker |
Nagihan Yalçın |
| Date |
December 10th, 2025 |
| Time |
14:30 (GMT+3) |
Abstract
The integration of Artificial Intelligence technologies into financial markets is reshaping traditional paradigms of trading, risk assessment, and regulatory oversight. This topic investigates the technical and operational frameworks enabling AI-driven decision systems in finance, with a focus on machine learning and deep learning architectures. These models are employed for time-series forecasting, algorithmic trading optimization, credit scoring, fraud detection, and market sentiment analysis using unstructured text data. The study further examines feature engineering techniques for high-frequency financial data, model evaluation metrics, and trust in automated decision pipelines. Additionally, the seminar highlights the regulatory and ethical challenges associated with AI adoption, particularly data privacy, model bias, and algorithmic accountability, in both global and Turkish financial ecosystems. By synthesizing empirical studies and real-world use cases, this research aims to provide a multidimensional perspective on the technical, strategic, and regulatory implications of AI integration in financial markets.
|
| Location |
Informatics Institute
Room 411
|
|
| 8 |
| Title |
Automated Recognition of Genetic Disorders via Ciliated Cell Segmentation and Motion Analysis: A Multi-Model Approach |
| Speaker |
Jeyhun Farzullayev |
| Date |
December 10th, 2025 |
| Time |
14:50 (GMT+3) |
Abstract
Cilia are microscopic, hair-like organelles present on the surface of many eukaryotic cells, playing a crucial role in fluid movement, locomotion, and sensory functions. Abnormalities in ciliary motion are strongly correlated with various genetic disorders, notably Primary Ciliary Dyskinesia (PCD). This research presents a comprehensive pipeline for the automated identification of genetic illnesses through the segmentation and analysis of ciliary motion using high-speed video microscopy (HSVM) data. The project leverages advanced deep learning models—YOLOv8 and U-Net—applied to meticulously annotated datasets. Such models can assist clinicians in identifying subtle phenotypic deviations indicative of underlying genetic disorders. However, the development of such systems is fraught with domain-specific challenges, including the scarcity of annotated biomedical data, variability in video quality, and the intricate, fine-scale morphology of cilia. The segmentation of cilia from complex biological backgrounds is addressed through careful data preprocessing, manual annotation, and transfer learning strategies. Model performance is evaluated using standard metrics such as mean Average Precision (mAP), precision, and recall. The results demonstrate the promise of using AI for early detection of genetic disorders related to ciliary dysfunction, while also highlighting current limitations regarding data scarcity and annotation challenges. Future directions include dataset expansion and integration of motion-based feature extraction for robust clinical applications.
|
| Location |
Informatics Institute
Room 411
|
|
| 9 |
| Title |
Anomaly Detection on Drones |
| Speaker |
Elçin Can |
| Date |
December 17th, 2025 |
| Time |
13:30 (GMT+3) |
Abstract
Drones are increasingly deployed in safety-critical missions, yet factors like sensor faults, harsh environmental conditions, and remote cyber-attacks can compromise the drone’s physical stability, leading to serious risks and potential crashes in public areas. Real-time monitoring is essential to mitigate these hazards. The work DronLomaly proposes a deep learning-based tool for runtime anomaly detection using flight log data. It trains a Bi-LSTM model on baseline logs from successful missions to learn the sequential patterns and correlations of normal flight states. During operation, DronLomaly continuously predicts the drone’s next state; if the actual sensor readings significantly deviate from this prediction, an anomaly is reported within milliseconds.
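The detection rule described above, flagging a timestep when the observed state deviates too far from the model's next-state prediction, reduces to residual thresholding; the Bi-LSTM predictor is abstracted away here, and the threshold value is illustrative:

```python
import numpy as np

def detect_anomalies(actual: np.ndarray, predicted: np.ndarray, threshold: float):
    """Flag timesteps where the predicted next state deviates from the
    observed sensor reading by more than `threshold`. In DronLomaly the
    predictions would come from a Bi-LSTM trained on normal-flight logs."""
    residual = np.abs(actual - predicted)
    return np.where(residual > threshold)[0]

actual    = np.array([1.0, 1.1, 5.0, 1.2])  # observed sensor readings
predicted = np.array([1.0, 1.0, 1.1, 1.1])  # model's next-state predictions
print(detect_anomalies(actual, predicted, threshold=0.5))  # [2]
```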
|
| Location |
Informatics Institute
Room 411
|
|
| 10 |
| Title |
Traffic Vehicle Density Estimation from Recorded Videos Using Object Detection |
| Speaker |
Eylül Başak Aksoy |
| Date |
December 17th, 2025 |
| Time |
13:50 (GMT+3) |
Abstract
Urban traffic congestion has become an increasingly pressing issue, particularly during peak hours when vehicle density disrupts mobility and safety. In this study, I propose a vehicle counting system based on the YOLOv11 deep learning model, applied to video data obtained from city surveillance cameras (MOBESE). The system is designed to count vehicles passing through specific regions of interest during selected time intervals throughout the day. YOLOv11’s improved architecture enables accurate detection of various vehicle types, even under challenging conditions such as partial occlusion and fluctuating lighting. Video frames are processed in real-time, and the resulting vehicle counts are recorded for further analysis. The proposed method provides a reliable and scalable approach to traffic density monitoring, offering practical benefits for smart city development and intelligent transportation systems.
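Once the detector has produced bounding boxes, the region-of-interest counting step is simple geometry; the rectangular ROI and the (x1, y1, x2, y2) box format below are assumptions for illustration (the actual system consumes YOLOv11 detections on MOBESE footage):

```python
import numpy as np

def count_in_roi(boxes: np.ndarray, roi: tuple) -> int:
    """Count detections whose box centers fall inside a rectangular
    region of interest. `boxes` holds one (x1, y1, x2, y2) row per
    detection, a common output layout for YOLO-style detectors."""
    x1, y1, x2, y2 = roi
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    inside = (cx >= x1) & (cx <= x2) & (cy >= y1) & (cy <= y2)
    return int(inside.sum())

dets = np.array([[10, 10, 30, 30],     # center (20, 20)  -> inside ROI
                 [90, 90, 110, 110]])  # center (100, 100) -> outside
print(count_in_roi(dets, roi=(0, 0, 50, 50)))  # 1
```

Per-interval density then follows by accumulating these counts over frames within each time window.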
|
| Location |
Informatics Institute
Room 411
|
|
| 11 |
| Title |
AI-Based Stroke Detection and Temporal Classification System Using Radiological Images (CT and MRI) |
| Speaker |
Muhammed Tuncay Aydın |
| Date |
December 17th, 2025 |
| Time |
14:10 (GMT+3) |
Abstract
This study presents an integrated and novel deep learning architecture developed for both the high-accuracy detection of ischemic stroke cases and the classification of their stages (hyperacute, subacute, normal, chronic) using brain Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images. The data infrastructure of the study was established using the T.C. Ministry of Health Open Data Portal and TEKNOFEST competition data; the generalizability of the model was validated using external datasets from Cyprus Near East Hospital and ISLES 2022. Furthermore, synthetic data generation was performed using the StyleGAN2-ADA model to address class imbalance in the dataset and enhance learning capacity. Methodologically, EfficientNet (B0/B1), ConvNeXt (Base), and MaxViT (Base) architectures were utilized as the baseline; the focusing and feature extraction capabilities of these models were enhanced with original modifications. Multi-scale feature fusion was achieved by integrating the "Spatial-Channel Attention Module (SCAM)," "Localized Pooling (LP)," "Dynamic Scale Adapter (DScale)," and "Convolutional Block Attention Module (CBAM)" into the architectures. In the Stacking Ensemble Model structure developed to overcome the limitations of single models, model outputs were optimized using an XGBoost-based meta-learner and a two-stage adaptive decision mechanism. Experimental results indicate that the proposed hybrid system exhibited superior performance in detecting the presence of stroke with 0.99 Precision, 0.98 Recall, and 0.98 F1-score; whereas in the more complex problem of stage classification, it achieved values of 0.92 for F1-score, Accuracy, Precision, and Recall. This study demonstrates that combining modern deep learning architectures with multi-attention mechanisms and ensemble strategies offers a reliable and precise solution for both diagnosis and staging phases in clinical decision support processes.
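The stacking idea, base models feeding their predicted probabilities to a meta-learner, can be sketched with scikit-learn; everything below is a toy stand-in (synthetic features in place of the CNN/ViT backbones, GradientBoosting in place of XGBoost, and no adaptive decision stage):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy features standing in for deep-backbone outputs.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base models (stand-ins for the EfficientNet / ConvNeXt / MaxViT heads).
bases = [RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
         LogisticRegression(max_iter=1000).fit(X_tr, y_tr)]

# Meta-learner consumes the stacked base-model probabilities.
meta_X_tr = np.column_stack([m.predict_proba(X_tr)[:, 1] for m in bases])
meta = GradientBoostingClassifier(random_state=0).fit(meta_X_tr, y_tr)

meta_X_te = np.column_stack([m.predict_proba(X_te)[:, 1] for m in bases])
acc = meta.score(meta_X_te, y_te)
print(f"stacked accuracy: {acc:.2f}")
```

In a leakage-safe version, the meta-features for training would come from out-of-fold predictions rather than in-sample probabilities.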
|
| Location |
Informatics Institute
Room 411
|
|
| 12 |
| Title |
Enhanced Voxel-based MRI Preprocessing Pipeline for Vision Transformer-Based Alzheimer’s Detection |
| Speaker |
M. Ikbal Alboushi |
| Date |
December 17th, 2025 |
| Time |
14:30 (GMT+3) |
Abstract
Alzheimer’s Disease (AD) diagnosis from structural MRI remains challenging due to subtle morphological changes and high variability in raw neuroimaging data. While deep learning models show promise for automated diagnosis, their performance is critically dependent on preprocessing quality. This study develops and validates a fully automated voxel-based preprocessing pipeline integrating FSL and ANTs tools, encompassing reorientation, rigid registration, bias field correction, and skull stripping. The pipeline’s effectiveness was evaluated using MedViT, a hybrid CNN-Transformer architecture, on 1,500 T1-weighted MRI volumes from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. A two-stage evaluation compared preprocessed versus minimally processed raw data. The pilot study (n=660) demonstrated a 5.97 percentage point accuracy improvement with preprocessing (57.69% versus 51.72%). Full-cohort evaluation (n=1,500) achieved 98% accuracy with preprocessed data, representing a 46.3 percentage point gain over the raw data baseline. Ablation analysis revealed that registration provided the largest individual contribution (+2.51%), followed by bias correction (+1.88%) and skull stripping (+1.58%). The pipeline demonstrated robust generalization across 50+ acquisition sites with diverse scanner protocols, addressing critical barriers to clinical translation. These results provide empirical evidence that systematic, automated preprocessing is essential for reliable deep learning-based AD diagnosis from structural MRI, establishing a methodological foundation for reproducible neuroimaging AI applications.
|
| Location |
Informatics Institute
Room 411
|
|
| 13 |
| Title |
Roles of Agentic AI in Computer Vision Problems - A Comprehensive Review |
| Speaker |
Mehmet Can Erdağlı |
| Date |
December 24th, 2025 |
| Time |
13:30 (GMT+3) |
Abstract
Computer vision is an integral field that bridges human visual perception and computational understanding. It can be applied to many real-world problems, with remote sensing, autonomous driving, and medical imaging being among the most prominent areas. While computer vision has been a substantial research area for decades, its main progress began with the technological rise of faster computation, making previously intractable problems much more solvable. Artificial intelligence has opened many opportunities in computer vision and is now applicable across many of its subfields. However, despite significant improvements in computationally advanced systems, there remains a critical gap regarding real-world deployment of these techniques: autonomy. This study reviews the evolution of computer vision techniques from early stages like mathematical feature extraction methods to modern deep learning approaches, such as Convolutional Neural Networks. In this direction, we examine how agentic AI, a rising concept focused on autonomous, goal-oriented, and self-capable systems, may offer solutions to the autonomy challenges in computer vision applications. Through a comprehensive literature review, this study synthesizes current research on agentic AI frameworks and their potential to address the autonomous decision-making requirements that traditional computer vision systems lack. We discuss the implications of integrating agentic capabilities into computer vision systems and identify promising directions for future research.
|
| Location |
Informatics Institute
Room 411
|
|
| 14 |
| Title |
A Systematic Review of Optimization and Simulation Methods for Electric Vehicle Charging Station Placement |
| Speaker |
Przemyslaw Grzegorz Erbert |
| Date |
December 24th, 2025 |
| Time |
13:50 (GMT+3) |
Abstract
As electric vehicle (EV) adoption accelerates, the strategic placement of charging infrastructure becomes critical for ensuring efficiency and user confidence. This paper provides a systematic review of the literature on the EV charging station location problem, categorizing existing research based on demand modeling techniques, optimization objectives, and solution algorithms. The analysis identifies key trends, discusses the strengths of various approaches, and highlights significant research gaps to guide future work in developing more comprehensive and realistic planning models.
|
| Location |
Informatics Institute
Room 411
|
|
| 15 |
| Title |
Semantic Intent Orchestration for High-Stakes 6G: A Proactive, Explainable Framework Beyond Service Level Agreements |
| Speaker |
Mert Ertürk |
| Date |
December 24th, 2025 |
| Time |
14:10 (GMT+3) |
Abstract
The transition to 6G represents a significant shift in telecommunications, moving beyond simple performance improvements—such as throughput and latency—to ensuring reliability for high-stakes, safety-critical applications. The network is evolving from a basic connectivity provider into a critical, nervous-system-like infrastructure for applications such as remote telesurgery, autonomous mobility, and holographic disaster response. In these contexts, network failure is not measured in dropped packets or buffered video, but in physical harm, loss of life, or catastrophic infrastructure collapse. Current network management architectures, predicated on static Service Level Agreements (SLAs), rigid Quality of Service (QoS) classes, and document-centric Network Service Descriptors (NSDs), are fundamentally reactive. We believe that the system should benefit from the additional semantic contextual understanding required to autonomously prioritize life-critical flows in the dynamic, resource-constrained environments typical of post-disaster scenarios, and we propose the Semantic Intent Orchestration (SIO) framework, a proactive architecture designed for high-stakes 6G scenarios. The limitations of current systems can be illustrated by the 2023 Türkiye earthquake, where power outages and physical infrastructure damage rendered significant portions of the network inoperable. The SIO framework addresses these vulnerabilities by synthesizing core ideas from emerging research: a multi-stakeholder Intent-Based Networking (IBN) architecture; an AI-native Radio Access Network (AI-RAN) [1]; a Generative AI (GenAI)-based Semantic Intent Translation Engine (SITE) [2]; and a Service-Optimized Logging for Resource Allocation (SOLRA) mechanism supported by a novel Semantic Conflict Resolution (SCORE) utility function [3].
Furthermore, the framework bridges the critical “interoperability gap” between Agentic AI and legacy infrastructure through the integration of the Model Context Protocol (MCP).
|
| Location |
Informatics Institute
Room 411
|
|
| 16 |
| Title |
Benchmarking ML-KEM on RISC-V IoT Microcontrollers |
| Speaker |
Metehan Arslan |
| Date |
December 24th, 2025 |
| Time |
14:30 (GMT+3) |
Abstract
Post-Quantum Cryptography (PQC) is rapidly becoming a prerequisite for secure Internet of Things (IoT) communication. This study presents a comprehensive performance evaluation of the NIST-standardized ML-KEM (Kyber) algorithm on the Espressif ESP32-C3, a cost-effective RISC-V microcontroller. We implemented a reconfigurable Proof of Concept (PoC) based on the reference C implementation, integrated with the device’s hardware true random number generator (TRNG). We evaluate all three security levels—Kyber-512, Kyber-768, and Kyber-1024—analyzing the trade-offs between execution latency and memory footprint. Results indicate that while Kyber-512 offers a feasible handshake time of ≈2.4s, the higher-security Kyber-1024 demands nearly 6s and 18 KB of stack, pushing the limits of standard IoT configurations. These findings establish a performance baseline for unoptimized RISC-V PQC and highlight the critical need for assembly-level optimization.
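The latency figures above come from repeated timing of the KEM primitives (keygen, encapsulation, decapsulation); a generic harness of that shape looks like this, sketched in Python for illustration, whereas the actual PoC times C code on the ESP32-C3 itself:

```python
import time

def bench(fn, iters: int = 50) -> float:
    """Return the mean wall-clock latency of `fn` over `iters` runs.
    On the microcontroller the equivalent uses a cycle counter; here
    `time.perf_counter` stands in for it."""
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) / iters

# `fake_encaps` is a placeholder workload, not a real KEM operation.
fake_encaps = lambda: sum(x * x for x in range(1000))
print(f"mean latency: {bench(fake_encaps) * 1e6:.1f} µs")
```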
|
| Location |
Informatics Institute
Room 411
|
|
| 17 |
| Title |
SWIPT IoT and Fog-enabled network with Byte Aware LC-WRR Scheduling |
| Speaker |
İrfan Arabacı |
| Date |
December 31st, 2025 |
| Time |
13:30 (GMT+3) |
Abstract
In the global networking sector, load allocation and scheduling have changed rapidly with critical improvements in technology. Networking has shifted toward the simultaneous transmission of energy and data, which has empowered battery-limited technologies such as the Internet of Things (IoT). Simultaneous Wireless Information and Power Transfer (SWIPT) technology has entered the wireless world to address battery and data-transfer problems. Handling Data Transmission (DT) in IoT networking requires a vast number of Fog nodes operating with SWIPT technology. This paper proposes a novel approach in which Fog nodes employ the Least Connections Weighted Round Robin (LC-WRR) scheduler to optimize the processing queue. We conduct a rigorous comparative analysis of the LC-WRR algorithm against traditional scheduling alternatives, including the widely referenced Weighted Round Robin (WRR) policy, with the aim of demonstrating substantial improvements in system efficiency, throughput, and long-term network stability.
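A minimal sketch of the LC-WRR selection rule, assuming each fog node tracks an active-connection count and a static weight (the field names and node data are illustrative):

```python
def pick_node(nodes):
    """Least-Connections Weighted Round Robin (LC-WRR) selection:
    pick the fog node with the smallest active-connections-to-weight
    ratio, so heavier-weighted nodes absorb proportionally more load."""
    return min(nodes, key=lambda n: n["active"] / n["weight"])

nodes = [
    {"name": "fog-a", "weight": 3, "active": 3},  # ratio 1.0
    {"name": "fog-b", "weight": 1, "active": 2},  # ratio 2.0
    {"name": "fog-c", "weight": 2, "active": 1},  # ratio 0.5
]
chosen = pick_node(nodes)
chosen["active"] += 1   # assign the incoming request to the winner
print(chosen["name"])   # fog-c
```

Plain WRR, by contrast, would cycle by weight alone and ignore the live connection counts, which is the behavior the paper compares against.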
|
| Location |
Informatics Institute
Room 411
|
|
| 18 |
| Title |
Interpretable AI for Predicting Neoadjuvant Chemotherapy Response in Breast Cancer |
| Speaker |
Sümeyye Zülal Dik |
| Date |
December 31st, 2025 |
| Time |
13:50 (GMT+3) |
Abstract
Neoadjuvant chemotherapy (NAC) plays a critical role in the management of breast cancer; however, its clinical effectiveness varies widely across patients, and pathological response can only be assessed postoperatively. To address this limitation, Zhou et al. developed an interpretable artificial intelligence (AI) framework capable of predicting NAC response preoperatively using multimodal data derived from pre-treatment biopsy samples. The study analyzed clinical and histopathological data from 709 breast cancer patients, supplemented with TCGA multi-omics datasets and an external validation cohort. Pathology features were extracted using the self-supervised UNI model, and treatment-response prediction was performed with the attention-based CLAM multiple-instance learning architecture. Across three Residual Cancer Burden (RCB) classification subtasks, the model achieved its strongest and most stable performance in distinguishing RCB 0–1 (NAC-sensitive) from RCB 2–3 (NAC-resistant), reaching an AUC of 0.901 in training and 0.819 in external validation. These findings suggest that interpretable pathology-based AI systems can provide a feasible and explainable approach for preoperative NAC response prediction. Although external validation supports the model’s generalizability, broader multi-center studies are required before clinical implementation.
|
| Location |
Informatics Institute
Room 411
|
|
| 19 |
| Title |
Methodologies for Performance Evaluation in Rule Based Genomic Variant Classification: A Comparative Analysis |
| Speaker |
Amine Meryem Aydın |
| Date |
December 31st, 2025 |
| Time |
14:10 (GMT+3) |
Abstract
Next-Generation Sequencing technologies, including Whole-Genome Sequencing (WGS), Whole-Exome Sequencing (WES), and Targeted Gene Panels, generate a volume of genomic data that is too large to analyze efficiently. These technologies detect on the order of 5 million variants per individual for WGS and 60,000 variants per individual for WES, posing a considerable challenge in interpreting the detected variants. The lack of a uniform scheme for variant classification in the early days meant that interpretations were variable and, in many cases, incorrect. This revealed a critical requirement for uniform criteria that differentiate variants based on their clinical importance appropriately and consistently. In 2015, the American College of Medical Genetics and Genomics (ACMG), together with the Association for Molecular Pathology (AMP), published a guideline based on 28 criteria drawing on population data, computational and predictive data, and other evidence. This framework became widely accepted as the standard for variant classification. Manual application of these guidelines, however, introduced new challenges: the process is not only labor-intensive but also prone to human error, and subjective interpretation and variable weighting of the criteria can still lead to conflicting classifications between independent analysts and laboratories. There has been a shift toward automation as a way of overcoming these limitations and ensuring greater consistency. Several computational tools have been developed to apply the ACMG/AMP criteria in a more objective, reproducible, and efficient fashion.
This paper therefore discusses the standard methodologies for evaluating such automated tools, including benchmarking against expert-curated datasets on key performance metrics: accuracy, sensitivity, and speed, among others. A comparative study then illustrates how different algorithmic approaches directly impact performance and efficiency. A final important point is that tools for automated variant classification should rest on two factors: strong algorithmic design, and rule sets that are kept current and up to date.
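A rule-based classifier of the kind discussed reduces to counting evidence criteria and applying combining rules; the sketch below implements only a handful of grossly simplified, illustrative rules, not the actual 28-criterion ACMG/AMP guideline:

```python
def classify_variant(criteria):
    """Toy sketch of ACMG/AMP-style rule combination. `criteria` is a
    list of evidence tags such as "PVS1" (very strong pathogenic),
    "PS1" (strong), "PM1" (moderate), or "BA1" (stand-alone benign).
    The real guideline defines 28 criteria and far more combining
    rules; only a few illustrative ones appear here."""
    pvs = sum(c.startswith("PVS") for c in criteria)
    ps = sum(c.startswith("PS") for c in criteria)
    pm = sum(c.startswith("PM") for c in criteria)
    if "BA1" in criteria:
        return "Benign"
    if pvs >= 1 and (ps >= 1 or pm >= 2):
        return "Pathogenic"
    if ps >= 2:
        return "Pathogenic"
    if pvs >= 1 or ps >= 1:
        return "Likely pathogenic"
    return "Uncertain significance"

print(classify_variant(["PVS1", "PS1"]))  # Pathogenic
print(classify_variant(["PM1"]))          # Uncertain significance
```

Automated tools differ chiefly in how they assign these evidence tags from annotation databases; the combining logic itself is what benchmarking against expert-curated datasets evaluates.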
|
| Location |
Informatics Institute
Room 411
|
|