Saun TJ, Zuo KJ, Grantcharov TP (2019) Video technologies for recording open surgery: a systematic review. Surg Innov 26(5):599–612
Ahmadi E, Masel DT, Metcalf AY, Schuller K (2019) Inventory management of surgical supplies and sterile instruments in hospitals: a literature review. Health Syst 8(2):134–151
Patel A, Ashok A, Rao AS, Singh HN, Tripathi S (2022) Robotic assistant to surgeons for inventory handling. In: IEEE international conference on electronics, computing and communication technologies. 1–4
Rodrigues M, Mayo M, Patros P (2022) OctopusNet: machine learning for intelligent management of surgical tools. Smart Health 23:100244
Rodrigues M, Mayo M, Patros P (2022) Evaluation of deep learning techniques on a novel hierarchical surgical tool dataset. In: Australasian joint conference on artificial intelligence. 169–180
Jin A, Yeung S, Jopling J, Krause J, Azagury D, Milstein A, Fei-Fei L (2018) Tool detection and operative skill assessment in surgical videos using region-based convolutional neural networks. In: IEEE winter conference on applications of computer vision. 691–699
Zia A, Sharma Y, Bettadapura V, Sarin EL, Essa I (2018) Video and accelerometer-based motion analysis for automated surgical skills assessment. Int J Comput Assist Radiol Surg 13:443–455
Khalid S, Goldenberg M, Grantcharov T, Taati B, Rudzicz F (2020) Evaluation of deep learning models for identifying surgical actions and measuring performance. JAMA Netw Open 3:201664
McKnight RR, Pean CA, Buck JS, Hwang JS, Hsu JR, Pierrie SN (2020) Virtual reality and augmented reality—translating surgical training into surgical technique. Curr Rev Musculoskelet Med 13(6):663–674
Liu D, Li Q, Jiang T, Wang Y, Miao R, Shan F, Li Z (2021) Towards unified surgical skill assessment. In: IEEE conference on computer vision and pattern recognition. 9522–9531
Yang JH, Goodman ED, Dawes AJ, Gahagan JV, Esquivel MM, Liebert CA, Kin C, Yeung S, Gurland BH (2022) Using AI and computer vision to analyze technical proficiency in robotic surgery. Surg Endosc. https://doi.org/10.1007/s00464-022-09781-y
Kadkhodamohammadi A (2016) 3D detection and pose estimation of medical staff in operating rooms using RGB-D images. Dissertation, Strasbourg
Jin Y, Dou Q, Chen H, Yu L, Qin J, Fu CW, Heng PA (2017) SV-RCNet: workflow recognition from surgical videos using recurrent convolutional network. IEEE Trans Med Imag 37(5):1114–1126
Padoy N (2019) Machine and deep learning for workflow recognition during surgery. Minim Invasive Ther Allied Technol 28(2):82–90
Doughty M, Singh K, Ghugre NR (2021) SurgeonAssist-Net: towards context-aware head-mounted display-based augmented reality for surgical guidance. In: International conference on medical image computing and computer-assisted intervention. 667–677
Kadkhodamohammadi A, Sivanesan Uthraraj N, Giataganas P, Gras G, Kerr K, Luengo I, Oussedik S, Stoyanov D (2021) Towards video-based surgical workflow understanding in open orthopaedic surgery. Comput Methods Biomech Biomed Eng Imag Visual 9(3):286–293
Navab N, Blum T, Wang L, Okur A, Wendler T (2012) First deployments of augmented reality in operating rooms. Computer 45(7):48–55
Chen X, Xu L, Wang Y, Wang H, Wang F, Zeng X, Wang Q, Egger J (2015) Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display. J Biomed Inf 55:124–131
Qian L, Deguet A, Kazanzides P (2018) ARssist: augmented reality on a head mounted display for the first assistant in robotic surgery. Healthc Technol Lett 5(5):194–200
Burström G, Nachabe R, Persson O, Edström E, Terander AE (2019) Augmented and virtual reality instrument tracking for minimally invasive spine surgery: a feasibility and accuracy study. Spine 44(15):1097–1104
Elmi-Terander A, Burström G, Nachabe R, Skulason H, Pedersen K, Fagerlund M, Ståhl F, Charalampidis A, Söderman M, Holmin S, Babic D (2019) Pedicle screw placement using augmented reality surgical navigation with intraoperative 3D imaging: a first in-human prospective cohort study. Spine 44(7):517
Rodrigues P, Antunes M, Raposo C, Marques P, Fonseca F, Barreto JP (2019) Deep segmentation leverages geometric pose estimation in computer-aided total knee arthroplasty. Healthc Technol Lett 6(6):226–230
Fucentese SF, Koch PP (2021) A novel augmented reality-based surgical guidance system for total knee arthroplasty. Arch Orthop Trauma Surg 141(12):2227–2233
Doughty M, Ghugre NR, Wright GA (2022) Augmenting performance: a systematic review of optical see-through head-mounted displays in surgery. J Imag 8(7):203
von Atzigen M, Liebmann F, Hoch A, Spirig JM, Farshad M, Snedeker J, Fürnstahl P (2022) Marker-free surgical navigation of rod bending using a stereo neural network and augmented reality in spinal fusion. Med Image Anal 77:102365
Xu L, Zhang H, Wang J, Li A, Song S, Ren H, Qi L, Gu JJ, Meng MQ (2022) Information loss challenges in surgical navigation systems: from information fusion to AI-based approaches. Inf Fusion. https://doi.org/10.1016/j.inffus.2022.11.015
Girshick R (2015) Fast R-CNN. In: IEEE international conference on computer vision. 1440–1448
Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. In: IEEE conference on computer vision and pattern recognition. 3431–3440
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: IEEE conference on computer vision and pattern recognition. 770–778
Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC (2016) SSD: Single shot multibox detector. In: European conference on computer vision. 21–37
Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: IEEE conference on computer vision and pattern recognition. 779–788
He K, Gkioxari G, Dollár P, Girshick R (2017) Mask R-CNN. In: IEEE international conference on computer vision. 2961–2969
Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: IEEE conference on computer vision and pattern recognition. 4700–4708
Krizhevsky A, Sutskever I, Hinton GE (2017) ImageNet classification with deep convolutional neural networks. Commun ACM 60(6):84–90
Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: IEEE/CVF international conference on computer vision. 10012–10022
Nakawala H, Bianchi R, Pescatori LE, De Cobelli O, Ferrigno G, De Momi E (2019) Deep-onto network for surgical workflow and context recognition. Int J Comput Assist Radiol Surg 14(4):685–696
Rivoir D, Bodenstedt S, von Bechtolsheim F, Distler M, Weitz J, Speidel S (2019) Unsupervised temporal video segmentation as an auxiliary task for predicting the remaining surgery duration. In: OR 2.0 context-aware operating theaters and machine learning in clinical neuroimaging. 29–37
Shi X, Jin Y, Dou Q, Heng PA (2020) LRTD: long-range temporal dependency based active learning for surgical workflow recognition. Int J Comput Assist Radiol Surg 15(9):1573–1584
van Amsterdam B, Clarkson MJ, Stoyanov D (2021) Gesture recognition in robotic surgery: a review. IEEE Trans Biomed Eng. https://doi.org/10.1109/TBME.2021.3054828
Xia T, Jia F (2021) Against spatial-temporal discrepancy: contrastive learning-based network for surgical workflow recognition. Int J Comput Assist Radiol Surg 16(5):839–848
Zhang D, Wang R, Lo B (2021) Surgical gesture recognition based on bidirectional multi-layer independently RNN with explainable spatial feature extraction. In: IEEE international conference on robotics and automation. 1350–1356
Mottaghi A, Sharghi A, Yeung S, Mohareri O (2022) Adaptation of surgical activity recognition models across operating rooms. In: Medical image computing and computer assisted intervention. 530–540
Valderrama N, Ruiz Puentes P, Hernández I, Ayobi N, Verlyck M, Santander J, Caicedo J, Fernández N, Arbeláez P (2022) Towards holistic surgical scene understanding. In: Medical image computing and computer assisted intervention. 442–452
Zhang Y, Bano S, Page AS, Deprest J, Stoyanov D, Vasconcelos F (2022) Retrieval of surgical phase transitions using reinforcement learning. In: Medical image computing and computer assisted intervention. 497–506
Jin Y, Yu Y, Chen C, Zhao Z, Heng PA, Stoyanov D (2022) Exploring intra- and inter-video relation for surgical semantic scene segmentation. IEEE Trans Med Imag 41(11):2991–3002
Müller LR, Petersen J, Yamlahi A, Wise P, Adler TJ, Seitel A, Kowalewski KF, Müller B, Kenngott H, Nickel F, Maier-Hein L (2022) Robust hand tracking for surgical telestration. Int J Comput Assist Radiol Surg 17(8):1477–1486
Elfring R, de la Fuente M, Radermacher K (2010) Assessment of optical localizer accuracy for computer aided surgery systems. Comput Aided Surg 15(1–3):1–12
Picard F, Deep K, Jenny JY (2016) Current state of the art in total knee arthroplasty computer navigation. Knee Surg Sports Traumatol Arthrosc 24(11):3565–3574
Simoes R, Raposo C, Barreto JP, Edwards P, Stoyanov D (2018) Visual tracking vs optical tracking in computer-assisted intervention. IEEE Trans Biomed Eng
Herregodts S, Verhaeghe M, De Coninck B, Forward M, Verstraete MA, Victor J, De Baets P (2021) An improved method for assessing the technical accuracy of optical tracking systems for orthopaedic surgical navigation. Int J Med Robot Comput Assist Surg 17(4):e2285
Rodrigues M, Mayo M, Patros P (2022) Surgical tool datasets for machine learning research: a survey. Int J Comput Vis 130(9):2222–2248
Hein J, Seibold M, Bogo F, Farshad M, Pollefeys M, Fürnstahl P, Navab N (2021) Towards markerless surgical tool and hand pose estimation. Int J Comput Assist Radiol Surg 16(5):799–808
Doughty M, Ghugre NR (2022) HMD-EgoPose: head-mounted display-based egocentric marker-less tool and hand pose estimation for augmented surgical guidance. Int J Comput Assist Radiol Surg. https://doi.org/10.1007/s11548-022-02688-y
Laina I, Rieke N, Rupprecht C, Vizcaíno JP, Eslami A, Tombari F, Navab N (2017) Concurrent segmentation and localization for tracking of surgical instruments. In: International conference on medical image computing and computer-assisted intervention. 664–672
Garcia-Peraza-Herrera LC, Li W, Fidon L, Gruijthuijsen C, Devreker A, Attilakos G, Deprest J, Vander Poorten E, Stoyanov D, Vercauteren T, Ourselin S (2017) ToolNet: holistically-nested real-time segmentation of robotic surgical tools. In: IEEE/RSJ international conference on intelligent robots and systems. 5717–5722
Aklilu J, Yeung S (2022) ALGES: active learning with gradient embeddings for semantic segmentation of laparoscopic surgical images. In: Machine learning for healthcare. 182
Kurmann T, Marquez Neila P, Du X, Fua P, Stoyanov D, Wolf S, Sznitman R (2017) Simultaneous recognition and pose estimation of instruments in minimally invasive surgery. In: International conference on medical image computing and computer-assisted intervention. 505–513
Du X, Kurmann T, Chang PL, Allan M, Ourselin S, Sznitman R, Kelly JD, Stoyanov D (2018) Articulated multi-instrument 2-D pose estimation using fully convolutional networks. IEEE Trans Med Imag 37(5):1276–1287
Colleoni E, Moccia S, Du X, De Momi E, Stoyanov D (2019) Deep learning based robotic tool detection and articulation estimation with spatio-temporal layers. IEEE Robot Autom Lett 4(3):2714–2721
Kayhan M, Köpüklü O, Sarhan MH, Yigitsoy M, Eslami A, Rigoll G (2021) Deep attention based semi-supervised 2D-pose estimation for surgical instruments. In: International conference on pattern recognition. 444–460
Sarikaya D, Corso JJ, Guru KA (2017) Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection. IEEE Trans Med Imag 36(7):1542–1549
Fujii R, Hachiuma R, Kajita H, Saito H (2022) Surgical tool detection in open surgery videos. Appl Sci 12(20):10473