A new critical plane multiaxial fatigue criterion with an exponent to account for high mean stress effect.
2. Materials and Methods
2.1. Critical Plane Methods
2.2. New Critical Plane Method
2.3. Experimental Database and Numerical Implementation
Reference | Number of Data Items | Material | Reason for the Selection |
---|---|---|---|
Papuga [ ] | 15 | Various | Mean compressive loads applied |
Sauer [ ] | 8 | 14S-T aluminium | Static bending or static torsion stresses |
O’Connor [ ], Chodorowski [ ] | 10 | NiCrMo steel | High values of mean axial tension and compressive loads |
Ukrainetz [ ] | 6 | 0.1 C steel | Mean axial tension loads. Static shear stresses on torsional fatigue loading |
Grün et al. [ ] | 5 | 25CrMo4 steel | High values of mean axial tension and compression loads |
Lüpfert et al. [ ] | 8 | 20MnCr steel | High values of static compressive loads. Only biaxial loadings considered. |
Rausch [ ] | 14 | EN-GJV-450 cast iron | High values of mean axial tension and compression loads. Static shear stresses on torsional fatigue loading |
Tovo [ ] | 6 | EN-GJS-400-18 ductile cast iron | Mean axial tensile and compressive loads. Static shear stresses on torsional fatigue loading |
Pallarés-Santasmartas [ , ] | 6 | 34CrMo6 steel | Mean axial tensile and compressive loads. Static shear stresses on torsional fatigue loading |
3.1. General Results for the Complete Database
3.2. Results for Uniaxial Load Cases
3.3. Results for Load Cases with No Mean Stress
3.4. Results for Load Cases with Mean Stress
3.5. Results for Torsional Load Cases
4. Conclusions
Author Contributions
Data Availability Statement
Conflicts of Interest
Nomenclature
a, b, c, d | parameters of the critical plane criteria |
κ | fatigue limit ratio κ = σ_-1/τ_-1 |
σ_0 | fatigue limit in repeated axial loading |
σ_-1 | fully reversed axial fatigue limit |
σ_u | ultimate tensile strength |
σ_a | normal stress amplitude on the critical plane |
σ_m | mean normal stress on the critical plane |
σ_eq | equivalent normal stress amplitude |
τ_-1 | fully reversed torsional fatigue limit |
τ_a | alternating shear stress on the critical plane |
R | load ratio |
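The load ratio R in the result tables (e.g., the "high mean stress" range 0.05 ≤ R < 1) relates to the stress amplitude and mean stress in the nomenclature through the standard definitions R = σ_min/σ_max, σ_a = (σ_max − σ_min)/2 and σ_m = (σ_max + σ_min)/2. A minimal sketch of these conversions (function names are illustrative, not from the paper):

```python
# Conversions between (sigma_a, sigma_m) and the load ratio R = sigma_min / sigma_max.
# These are the standard definitions; the names are illustrative.

def to_ratio(sigma_a, sigma_m):
    """Load ratio R from stress amplitude and mean stress."""
    sigma_max = sigma_m + sigma_a
    sigma_min = sigma_m - sigma_a
    return sigma_min / sigma_max

def from_ratio(sigma_max, R):
    """(amplitude, mean stress) from maximum stress and load ratio."""
    sigma_min = R * sigma_max
    return (sigma_max - sigma_min) / 2.0, (sigma_max + sigma_min) / 2.0
```

Fully reversed loading (σ_m = 0) corresponds to R = −1, and repeated loading (σ_min = 0) to R = 0, which is why the tables treat 0.05 ≤ R < 1 as the high mean stress regime.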
Appendix A.2. Repeated Axial Loading Fatigue Test
Criterion | Equation (No Failure Condition) | Parameter Values |
---|---|---|
Findley (F) [ ] | | |
Robert (R) [ ] | | |
Papuga (P) [ ] | | |
Abasolo (A) * | | |
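The criteria in the table share the same critical plane structure: candidate material planes are scanned, the shear stress amplitude and the maximum (or mean) normal stress are evaluated on each plane, and the criterion is checked on the most damaging one. As a hedged sketch of this evaluation for sinusoidal uniaxial loading (the normal stress weight `a` and the plane discretization are illustrative assumptions, not the paper's calibrated parameter values):

```python
import numpy as np

# Sketch of a Findley-style critical plane search for uniaxial loading
# sigma(t) = sigma_m + sigma_a * sin(w*t). On a plane whose normal makes
# angle theta with the load axis: sigma_n = sigma * cos(theta)**2 and
# tau = (sigma / 2) * sin(2*theta). Assumes sigma_m + sigma_a > 0.

def findley_style_parameter(sigma_a, sigma_m, a=0.3, n_planes=181):
    """Maximum over candidate planes of tau_a + a * sigma_n_max."""
    best = -np.inf
    for theta in np.linspace(0.0, np.pi / 2, n_planes):
        tau_amp = sigma_a * abs(np.sin(2 * theta)) / 2.0   # shear amplitude on the plane
        sn_max = (sigma_m + sigma_a) * np.cos(theta) ** 2  # max normal stress on the plane
        best = max(best, tau_amp + a * sn_max)
    return best
```

With a = 0 the search reduces to the maximum shear amplitude plane at 45°, and increasing the mean stress σ_m increases the damage parameter, which is the mean stress sensitivity the criteria aim to model.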
Complete Database: 485 Experiments | Findley | Robert | Abasolo | Papuga |
---|---|---|---|---|
Mean value of the error | 8.4 | 3.5 | 3.9 | −0.6 |
Standard deviation | 17.6 | 10.3 | 9.7 | 7.4 |
Maximum value of the error | 147.0 | 77.3 | 43.9 | 42.6 |
Minimum value of the error | −38.7 | −36.4 | −32.1 | −36.4 |
Range of the error | 185.7 | 113.7 | 76.0 | 78.9 |
Mean absolute value of the error | 12.4 | 7.0 | 7.2 | 5.2 |
“Accurate” results (error range ± 5%) | 40.5 | 49.6 | 49.6 | 63.4 |
“Acceptable” results (error range ± 15%) | 70.0 | 87.4 | 86.0 | 94.4 |
Conservative results (error range +5% to +40%) | 41.6 | 36.0 | 36.8 | 17.1
Non-conservative results (error range −5% to −40%) | 12.6 | 13.8 | 13.4 | 19.3
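The rows of this and the following tables are simple statistics over the per-experiment prediction errors (in %, with positive errors conservative under the tables' convention, and the ±5%, ±15%, and ±40% thresholds taken from the row headings). A sketch of how such a summary could be computed, with illustrative names and made-up data:

```python
import numpy as np

def summarize(errors):
    """Summary statistics of prediction errors (%), mirroring the table rows."""
    e = np.asarray(errors, dtype=float)
    n = len(e)
    return {
        "mean": e.mean(),
        "std": e.std(ddof=1),                                    # sample standard deviation
        "max": e.max(),
        "min": e.min(),
        "range": e.max() - e.min(),
        "mean_abs": np.abs(e).mean(),
        "accurate_pct": 100.0 * np.sum(np.abs(e) <= 5) / n,      # within +/-5%
        "acceptable_pct": 100.0 * np.sum(np.abs(e) <= 15) / n,   # within +/-15%
        "conservative_pct": 100.0 * np.sum((e > 5) & (e <= 40)) / n,
        "nonconservative_pct": 100.0 * np.sum((e < -5) & (e >= -40)) / n,
    }
```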
Pure Axial Cases: 76 Experiments | Goodman | Gerber | Marin | Findley | Robert | Abasolo | Papuga |
---|---|---|---|---|---|---|---|
Mean value of the error | 50.3 | 9.5 | −8.4 | 8.3 | 4.1 | 2.6 | −2.6 |
Standard deviation | 109.4 | 52.0 | 23.2 | 28.0 | 16.6 | 10.4 | 10.7 |
Maximum value of the error | 596.0 | 268.4 | 68.3 | 147.0 | 77.3 | 38.9 | 42.6 |
Minimum value of the error | −39.1 | −52.9 | −57.5 | −38.7 | −27.4 | −25.9 | −36.4 |
Range of the error | 635.2 | 321.3 | 125.9 | 185.7 | 104.6 | 64.7 | 78.9 |
Mean absolute value of the error | 58.9 | 25.5 | 17.6 | 19.0 | 8.8 | 6.6 | 7.7 |
“Accurate” results (error range ± 5%) | 5.3 | 26.3 | 21.1 | 30.3 | 61.8 | 56.6 | 46.1 |
“Acceptable” results (error range ± 15%) | 26.3 | 59.2 | 67.1 | 50.0 | 84.2 | 86.8 | 89.5 |
Conservative results (error range +5% to +40%) | 42.1 | 28.9 | 13.2 | 31.6 | 22.4 | 31.6 | 17.1
Non-conservative results (error range −5% to −40%) | 21.1 | 34.2 | 50.0 | 26.3 | 11.8 | 11.8 | 35.5
Pure Axial Cases with High Mean Stress (0.05 ≤ R < 1): 35 Experiments | Goodman | Gerber | Marin | Findley | Robert | Abasolo | Papuga |
---|---|---|---|---|---|---|---|
Mean value of the error | 94.8 | 18.2 | −14.9 | 18.4 | 10.8 | 8.0 | −2.0 |
Standard deviation | 147.6 | 73.5 | 27.7 | 35.3 | 21.4 | 11.2 | 14.5 |
Maximum value of the error | 596.0 | 268.4 | 68.3 | 147.0 | 77.3 | 38.9 | 42.6 |
Minimum value of the error | −32.6 | −52.9 | −57.5 | −38.7 | −14.9 | −9.7 | −36.4 |
Range of the error | 628.6 | 321.3 | 125.9 | 185.7 | 92.1 | 48.6 | 78.9 |
Mean absolute value of the error | 100.4 | 41.2 | 24.9 | 27.3 | 14.0 | 9.4 | 10.8 |
“Accurate” results (error range ± 5%) | 2.9 | 11.4 | 11.4 | 25.7 | 45.7 | 37.1 | 25.7 |
“Acceptable” results (error range ± 15%) | 20.0 | 40.0 | 48.6 | 34.3 | 74.3 | 80.0 | 80.0 |
Conservative results (error range +5% to +40%) | 22.9 | 31.4 | 5.7 | 28.6 | 37.1 | 57.1 | 25.7
Non-conservative results (error range −5% to −40%) | 14.3 | 37.1 | 51.4 | 20.0 | 8.6 | 5.7 | 45.7
No Mean Stress Cases: 172 Experiments | Findley | Robert | Abasolo | Papuga |
---|---|---|---|---|
Mean value of the error | 1.6 | 1.6 | 1.6 | 1.1 |
Standard deviation | 5.0 | 5.0 | 5.0 | 3.7 |
Maximum value of the error | 16.1 | 16.1 | 16.1 | 12.4 |
Minimum value of the error | −12.4 | −12.4 | −12.4 | −9.7 |
Range of the error | 28.4 | 28.4 | 28.4 | 22.1 |
Mean absolute value of the error | 4.1 | 4.1 | 4.1 | 3.1 |
“Accurate” results (error range ± 5%) | 70.5 | 70.5 | 70.5 | 81.5 |
“Acceptable” results (error range ± 15%) | 98.8 | 98.8 | 98.8 | 100.0 |
Conservative results (error range +5% to +40%) | 21.4 | 21.4 | 21.4 | 13.9
Non-conservative results (error range −5% to −40%) | 8.1 | 8.1 | 8.1 | 4.6
Mean Stress Cases: 313 Experiments | Findley | Robert | Abasolo | Papuga |
---|---|---|---|---|
Mean value of the error | 12.1 | 4.6 | 5.1 | −1.5 |
Standard deviation | 20.7 | 12.1 | 11.4 | 8.7 |
Maximum value of the error | 147.0 | 77.3 | 43.9 | 42.6 |
Minimum value of the error | −38.7 | −36.4 | −32.1 | −36.4 |
Range of the error | 185.7 | 113.7 | 76.0 | 78.9 |
Mean absolute value of the error | 17.3 | 9.0 | 9.3 | 6.5 |
“Accurate” results (error range ± 5%) | 24.0 | 38.0 | 38.0 | 53.4 |
“Acceptable” results (error range ± 15%) | 54.0 | 81.2 | 78.9 | 91.4 |
Conservative results (error range +5% to +40%) | 52.7 | 44.1 | 45.4 | 18.8
Non-conservative results (error range −5% to −40%) | 15.0 | 16.9 | 16.3 | 27.5
High Mean Stress Cases (0.05 ≤ R < 1): 110 Experiments | Findley | Robert | Abasolo | Papuga |
---|---|---|---|---|
Mean value of the error | 19.5 | 6.5 | 8.2 | −2.3 |
Standard deviation | 27.9 | 15.7 | 13.2 | 11.7 |
Maximum value of the error | 147.0 | 77.3 | 43.9 | 42.6 |
Minimum value of the error | −38.7 | −18.0 | −16.6 | −36.4 |
Range of the error | 185.7 | 95.2 | 60.5 | 78.9 |
Mean absolute value of the error | 25.8 | 11.7 | 12.1 | 9.0 |
“Accurate” results (error range ± 5%) | 17.3 | 28.2 | 27.3 | 37.3 |
“Acceptable” results (error range ± 15%) | 35.5 | 74.5 | 69.1 | 82.7 |
Conservative results (error range +5% to +40%) | 43.6 | 47.3 | 55.5 | 24.5
Non-conservative results (error range −5% to −40%) | 17.3 | 21.8 | 16.4 | 37.3
Mean Stress Cases, Pure Torsion: 33 Experiments | Findley | Robert | Abasolo | Papuga |
---|---|---|---|---|
Mean value of the error | −0.7 | −4.7 | −5.4 | −7.5 |
Standard deviation | 11.0 | 6.3 | 5.6 | 8.3 |
Maximum value of the error | 29.1 | 5.8 | 5.4 | 7.7 |
Minimum value of the error | −31.0 | −18.0 | −16.6 | −31.3 |
Range of the error | 60.1 | 23.7 | 22.0 | 39.0 |
Mean absolute value of the error | 7.4 | 5.9 | 5.9 | 8.1 |
“Accurate” results (error range ± 5%) | 54.5 | 51.5 | 45.5 | 45.5 |
“Acceptable” results (error range ± 15%) | 87.9 | 90.9 | 93.9 | 84.8 |
Conservative results (error range +5% to +40%) | 24.2 | 3.0 | 3.0 | 3.0
Non-conservative results (error range −5% to −40%) | 21.2 | 45.5 | 51.5 | 51.5
Abasolo, M.; Pallares-Santasmartas, L.; Eizmendi, M. A New Critical Plane Multiaxial Fatigue Criterion with an Exponent to Account for High Mean Stress Effect. Metals 2024 , 14 , 964. https://doi.org/10.3390/met14090964
Single object tracking is a vital task in many applications across critical fields, yet it remains one of the most challenging vision tasks. In recent years, computer vision, and object tracking in particular, has seen the introduction or adoption of many novel techniques that set new performance frontiers. In this survey, we visit some of the cutting-edge techniques in vision, such as Sequence Models, Generative Models, Self-supervised Learning, Unsupervised Learning, Reinforcement Learning, Meta-Learning, Continual Learning, and Domain Adaptation, focusing on their application to single object tracking. We propose a novel categorization of single object tracking methods based on these techniques and trends, and conduct a comparative analysis of the performance the presented methods report on popular tracking benchmarks. We also analyze the pros and cons of the presented approaches and offer a guide to non-traditional techniques in single object tracking. Finally, we suggest potential avenues for future research in single object tracking.
Yang Y, Wu Y, Chen N (2019) Explorations on visual localization from active to passive. Multimedia Tools Appl 78(2):2269–2309
Article Google Scholar
Mathur G, Somwanshi D, Bundele MM (2018) Intelligent video surveillance based on object tracking. In: 2018 3rd international conference and workshops on rcent advances and innovations in engineering (ICRAIE), pp. 1–6. https://doi.org/10.1109/ICRAIE.2018.8710421
Cao J, Song C, Song S, Xiao F, Zhang X, Liu Z, Ang MH Jr (2021) Robust object tracking algorithm for autonomous vehicles in complex scenes. Remote Sens 13(16):3234
Zheng Z, Zhang X, Qin L, Yue S, Zeng P (2023) Cows’ legs tracking and lameness detection in dairy cattle using video analysis and siamese neural networks. Comput Electron Agricult 205:107618
Chen K, Oldja R, Smolyanskiy N, Birchfield S, Popov A, Wehr D, Eden I, Pehserl J (2020) Mvlidarnet: Real-time multi-class scene understanding for autonomous driving using multiple views. In: 2020 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp. 2288–2294. IEEE
Yilmaz A, Javed O, Shah M (2006) Object tracking: a survey. ACM Comput Surv 38(4):13. https://doi.org/10.1145/1177352.1177355
Smeulders AWM, Chu DM, Cucchiara R, Calderara S, Dehghan A, Shah M (2014) Visual tracking: an experimental survey. IEEE Trans Pattern Anal Mach Intell 36(7):1442–1468. https://doi.org/10.1109/TPAMI.2013.230
Javed S, Danelljan M, Khan F, Khan M, Felsberg M, Matas J (2023) Visual object tracking with discriminative filters and siamese networks: a survey and outlook. IEEE Trans Pattern Anal Mach Intell 45(05):6552–6574. https://doi.org/10.1109/TPAMI.2022.3212594
Kugarajeevan J, Kokul T, Ramanan A, Fernando, S (2023) Transformers in single object tracking: an experimental survey. IEEE Access
Zhang Y, Wang T, Liu K, Zhang B, Chen L (2021) Recent advances of single-object tracking methods: a brief survey. Neurocomputing 455, 1–11 https://doi.org/10.1016/j.neucom.2021.05.011
Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N (2021) An image is worth 16x16 words: Transformers for image recognition at scale. In: International conference on learning representations. https://openreview.net/forum?id=YicbFdNTTy
Wei X, Bai Y, Zheng Y, Shi D, Gong Y (2023) Autoregressive visual tracking. In: 2023 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 9697–9706. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR52729.2023.00935
Chen X, Peng H, Wang D, Lu H, Hu H (2023) Seqtrack: Sequence to sequence learning for visual object tracking. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 14572–14581
Zhang H, Liang J, Zhang J, Zhang T, Lin Y, Wang Y (2023) Attention-driven memory network for online visual tracking. IEEE Transactions on Neural Networks and Learning Systems, 1–14 https://doi.org/10.1109/TNNLS.2023.3299412
Zhao X, Liu Y, Han G (2021) Cooperative use of recurrent neural network and siamese region proposal network for robust visual tracking. IEEE Access 9, 57704–57715 https://doi.org/10.1109/ACCESS.2021.3072778
Gao J, Zhang T, Xu C (2019) Graph convolutional tracking. In: 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 4644–4654 https://doi.org/10.1109/CVPR.2019.00478
Wang Z, Zhou Z, Chen F, Xu J, Pei W, Lu G (2023) Robust tracking via fully exploring background prior knowledge. IEEE Transactions on Circuits and Systems for Video Technology, 1–1 https://doi.org/10.1109/TCSVT.2023.3323702
Yang T, Chan AB (2017) Recurrent filter learning for visual tracking. In: 2017 IEEE International conference on computer vision workshops (ICCVW), pp. 2010–2019. https://doi.org/10.1109/ICCVW.2017.235
Zhao H, Wang D, Lu H (2023) Representation learning for visual object tracking by masked appearance transfer. In: 2023 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 18696–18705. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR52729.2023.01793
Wu Q, Yang T, Liu Z, Wu B, Shan Y, Chan AB (2023) Dropmae: Masked autoencoders with spatial-attention dropout for tracking tasks. In: 2023 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 14561–14571. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR52729.2023.01399
Guo J, Xu T, Jiang S, Shen Z (2018) Generating reliable online adaptive templates for visual tracking. In: 2018 25th IEEE international conference on image processing (ICIP), pp. 226–230. https://doi.org/10.1109/ICIP.2018.8451440
Song Y, Ma C, Wu X, Gong L, Bao L, Zuo W, Shen C, Lau R, Yang M-H (2018) Vital: visual tracking via adversarial learning, pp. 8990–8999. https://doi.org/10.1109/CVPR.2018.00937
Yao B, Li J, Xue S, Wu J, Guan H, Chang J, Ding Z (2022) Garat: Generative adversarial learning for robust and accurate tracking. Neural Netw 148:206–218. https://doi.org/10.1016/j.neunet.2022.01.010
Yin Y, Xu D, Wang X, Zhang L (2020) Adversarial feature sampling learning for efficient visual tracking. IEEE Trans Auto Sci Eng 17(2):847–857. https://doi.org/10.1109/TASE.2019.2948402
Zhang J, Zhang Y (2023) Siamese network for object tracking with diffusion model. ICDIP ’23. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3604078.3604132
Wang X, Li C, Luo B, Tang J (2018) Sint++: Robust visual tracking via adversarial positive instance generation. In: 2018 IEEE/CVF conference on computer vision and pattern recognition. https://doi.org/10.1109/CVPR.2018.00511
Kwon J (2020) Robust visual tracking based on variational auto-encoding markov chain monte carlo. Inform Sci 512, 1308–1323 https://doi.org/10.1016/j.ins.2019.09.015
Zhu W, Xu L, Meng J (2023) Consistency-based self-supervised visual tracking by using query-communication transformer. Knowl-Based Syst 278:110849. https://doi.org/10.1016/j.knosys.2023.110849
Li X, Liu S, De Mellow S, Wang X, Kautz J, Yang M-H (2019) Joint-task self-supervised learning for temporal correspondence. In: NeurIPS
Zhu W, Wang Z, Xu L, Meng J (2022) Exploiting temporal coherence for self-supervised visual tracking by using vision transformer. Knowl-Based Syst 251:109318. https://doi.org/10.1016/j.knosys.2022.109318
Li X, Pei W, Wang Y, He Z, Lu H, Yang M-H (2022) Self-supervised tracking via target-aware data synthesis. IEEE transactions on neural networks and learning systems, 1–12 https://doi.org/10.1109/TNNLS.2022.3231537
Yuan W, Wang M, Chen Q (2020) Self-supervised object tracking with cycle-consistent siamese networks, pp. 10351–10358. https://doi.org/10.1109/IROS45743.2020.9341621
Wang Z, Zhao H, Li Y-L, Wang S, Torr P, Bertinetto L (2021) Do different tracking tasks require different appearance models? In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 726–738. Curran Associates, Inc.
Wu Q, Wan J, Chan AB (2021) Progressive unsupervised learning for visual object tracking. In: 2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 2992–3001. https://doi.org/10.1109/CVPR46437.2021.00301
Wang N, Song Y, Ma C, Zhou W, Liu W, Li H (2019) Unsupervised deep tracking. In: 2019 IEEE/CVF Conference on computer vision and pattern recognition (CVPR), pp. 1308–1317. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR.2019.00140
Park E, Berg AC (2018) Meta-tracker: Fast and robust online adaptation for visual object trackers. In: Proceedings of the European Conference on Computer Vision (ECCV)
Li P, Chen B, Ouyang W, Wang D, Yang X, Lu H (2019) Gradnet: Gradient-guided network for visual object tracking. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV)
Bhat G, Danelljan M, Gool LV, Timofte R (2019) Learning discriminative model prediction for tracking. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV)
Dong X, Shen J, Shao L, Porikli F (2020) Clnet: A compact latent network for fast adjusting siamese trackers. In: European conference on computer vision, pp. 378–395. Springer
Wu Q, Chan AB (2021) Meta-graph adaptation for visual object tracking. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6. https://doi.org/10.1109/ICME51207.2021.9428441
Paul M, Danelljan M, Mayer C, Van Gool L (2022) Robust visual tracking by segmentation. In: European conference on computer vision, pp. 571–588. Springer
Zhang H, Zhu M, Zhang J, Zhuo L (2019) Long-term visual object tracking via continual learning. IEEE Access 7, 182548–182558 https://doi.org/10.1109/ACCESS.2019.2960321
Choi J, Baik S, Choi M, Kwon J, Lee KM (2022) Visual tracking by adaptive continual meta-learning. IEEE Access 10, 9022–9035 https://doi.org/10.1109/ACCESS.2022.3143809
Li H, Wang X, Shen F, Li Y, Porikli F, Wang M (2019) Real-time deep tracking via corrective domain adaptation. IEEE Trans Circ Syst Video Technol 29(9):2600–2612. https://doi.org/10.1109/TCSVT.2019.2923639
Nam H, Han B (2016) Learning multi-domain convolutional neural networks for visual tracking. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp. 4293–4302. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR.2016.465
Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov R, Le QV (2019) XLNet: Generalized Autoregressive Pretraining for Language Understanding. Curran Associates Inc., Red Hook, NY, USA
Google Scholar
Bertinetto L, Valmadre J, Henriques JF, Vedaldi A, Torr PHS (2016) Fully-convolutional siamese networks for object tracking. In: Hua G, Jégou H (eds) Computer Vision—ECCV 2016 Workshops. Springer, Cham, pp 850–865
Chapter Google Scholar
Ye B, Chang H, Ma B, Shan S, Chen X (2022) Joint feature learning and relation modeling for tracking: A one-stream framework. In: Avidan S, Brostow G, Cissé M, Farinella GM, Hassner T (eds) Computer Vision - ECCV 2022. Springer, Cham, pp 341–357
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Lu, Polosukhin I (2017) Attention is all you need. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 30. Curran Associates, Inc
SHI X, Chen Z, Wang H, Yeung D-Y, Wong W-k, WOO W-c (2015) Convolutional lstm network: A machine learning approach for precipitation nowcasting. In: Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 28. Curran Associates, Inc.
Lecun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278–2324. https://doi.org/10.1109/5.726791
Kipf TN, Welling M (2017) Semi-supervised classification with graph convolutional networks. In: International conference on learning representations. https://openreview.net/forum?id=SJU4ayYgl
Li B, Yan J, Wu W, Zhu Z, Hu X (2018) High performance visual tracking with siamese region proposal network. In: 2018 IEEE/CVF conference on computer vision and pattern recognition, pp. 8971–8980. https://doi.org/10.1109/CVPR.2018.00935
Ren S, He K, Girshick R, Sun J (2015) Faster r-cnn: Towards real-time object detection with region proposal networks. In: Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 28. Curran Associates, Inc
He K, Gkioxari G, Dollar P, Girshick R (2017) Mask r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV)
Kirillov A, Wu Y, He K, Girshick R (2020) Pointrend: Image segmentation as rendering. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR)
Sun K, Xiao B, Liu D, Wang J (2019) Deep high-resolution representation learning for human pose estimation. In: 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 5686–5696. https://doi.org/10.1109/CVPR.2019.00584
Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: International conference on learning representations
Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9, 1735–80 https://doi.org/10.1162/neco.1997.9.8.1735
Zhang H, Zhang J, Nie G, Hu J, Zhang WJC (2022) Residual memory inference network for regression tracking with weighted gradient harmonized loss. Inform Sci 597, 105–124 https://doi.org/10.1016/j.ins.2022.03.047
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)
Kenton JDM-WC, Toutanova LK (2019) Bert: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of naacL-HLT, vol. 1, p. 2
Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L (2020) Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th annual meeting of the association for computational linguistics, pp. 7871–7880
Kitaev N, Kaiser L, Levskaya A (2020) Reformer: The efficient transformer. In: International conference on learning representations. https://openreview.net/forum?id=rkgNKkHtvB
He K, Chen X, Xie S, Li Y, Dollár P, Girshick R (2022) Masked autoencoders are scalable vision learners. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16000–16009
Wang X, Zhao K, Zhang R, Ding S, Wang Y, Shen W (2022) Contrastmask: Contrastive learning to segment every thing. In: 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 11594–11603. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR52688.2022.01131
Chang H, Zhang H, Jiang L, Liu C, Freeman WT (2022) Maskgit: Masked generative image transformer. In: 2022 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 11305–11315. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR52688.2022.01103
Law H, Deng J (2018) Cornernet: Detecting objects as paired keypoints. In: Proceedings of the European conference on computer vision (ECCV), pp. 734–750
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial networks. Adv Neural Inform Process Syst 3 https://doi.org/10.1145/3422622
Liu S, Wang T, Bau D, Zhu J-Y, Torralba A (2020) Diverse image generation via self-conditioned gans. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR)
Wang Y, Wu C, Herranz L, Weijer J, Gonzalez-Garcia A, Raducanu B (2018) Transferring gans: generating images from limited data. In: Proceedings of the European conference on computer vision (ECCV)
Han C, Hayashi H, Rundo L, Araki R, Shimoda W, Muramatsu S, Furukawa Y, Mauri G, Nakayama H (2018) Gan-based synthetic brain mr image generation. In: 2018 IEEE 15th International symposium on biomedical imaging (ISBI 2018), pp. 734–738. https://doi.org/10.1109/ISBI.2018.8363678
Mustikovela SK, De Mello S, Prakash A, Iqbal U, Liu S, Nguyen-Phuoc T, Rother C, Kautz J (2021) Self-supervised object detection via generative image synthesis. In: Proceedings of the IEEE/CVF International conference on computer vision (ICCV), pp. 8609–8618
Li J, Liang X, Wei Y, Xu T, Feng J, Yan S (2017) Perceptual generative adversarial networks for small object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)
Souly N, Spampinato C, Shah M (2017) Semi supervised semantic segmentation using generative adversarial network. In: Proceedings of the IEEE international conference on computer vision (ICCV)
Zhang C, Tang Y, Zhao C, Sun Q, Ye Z, Kurths J (2021) Multitask gans for semantic segmentation and depth completion with cycle consistency. IEEE Trans Neural Netw Learn Syst 32(12):5404–5415. https://doi.org/10.1109/TNNLS.2021.3072883
Chopra S, Hadsell R, LeCun Y (2005) Learning a similarity metric discriminatively, with application to face verification. In: 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05), vol. 1, pp. 539–5461. https://doi.org/10.1109/CVPR.2005.202
Ning J, Yang J, Jiang S, Zhang L, Yang M-H (2016) Object tracking via dual linear structured svm and explicit feature map, pp. 4266–4274. https://doi.org/10.1109/CVPR.2016.462
Suwendi A, Allebach JP (2008) Nearest-neighbor and bilinear resampling factor estimation to detect blockiness or blurriness of an image. J Electron Imaging 17(2):023005. https://doi.org/10.1117/1.2912053
Sohl-Dickstein J, Weiss E, Maheswaranathan N, Ganguli S (2015) Deep unsupervised learning using nonequilibrium thermodynamics. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 37, pp. 2256–2265. PMLR, Lille, France. https://proceedings.mlr.press/v37/sohl-dickstein15.html
Ho J, Jain A, Abbeel P (2020) Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851. Curran Associates, Inc
Ho J, Saharia C, Chan W, Fleet DJ, Norouzi M, Salimans T (2022) Cascaded diffusion models for high fidelity image generation. J Mach Learn Res 23 (1)
Dhariwal P, Nichol A (2021) Diffusion models beat gans on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794. Curran Associates, Inc
Gu S, Chen D, Bao J, Wen F, Zhang B, Chen D, Yuan L, Guo B (2022) Vector quantized diffusion model for text-to-image synthesis. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 10696–10706
Peng D, Hu P, Ke Q, Liu J (2023) Diffusion-based image translation with label guidance for domain adaptive semantic segmentation. In: Proceedings of the IEEE/CVF international conference on computer vision (ICCV), pp. 808–820
Yang X, Wang X (2023) Diffusion model as representation learner. In: Proceedings of the IEEE/CVF International conference on computer vision (ICCV), pp. 18938–18949
Chen S, Sun P, Song Y, Luo P (2023) Diffusiondet: Diffusion model for object detection. In: Proceedings of the IEEE/CVF International conference on computer vision (ICCV), pp. 19830–19843
Luo R, Song Z, Ma L, Wei J, Yang W, Yang M (2023) Diffusiontrack: Diffusion model for multi-object tracking. arXiv preprint arXiv:2308.09905
Song J, Meng C, Ermon S (2021) Denoising diffusion implicit models. In: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, OpenReview.net, (2021). https://openreview.net/forum?id=St1giarCHLP
Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) Imagenet: A large-scale hierarchical image database. In: 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. https://doi.org/10.1109/CVPR.2009.5206848
Huang L, Zhao X, Huang K (2021) Got-10k: a large high-diversity benchmark for generic object tracking in the wild. IEEE Trans Pattern AnalMach Intell 43(5):1562–1577. https://doi.org/10.1109/TPAMI.2019.2957464
Fan H, Lin L, Yang F, Chu P, Deng G, Yu S, Bai H, Xu Y, Liao C, Ling H (2019) Lasot: A high-quality benchmark for large-scale single object tracking. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5369–5378. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR.2019.00552
Kingma DP, Welling M (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114
Kullback S, Leibler RA (1951) On information and sufficiency. Ann Math Stat 22(1):79–86
Razavi A, Oord A, Vinyals O (2019) Generating diverse high-fidelity images with vq-vae-2. Advances in neural information processing systems 32
Cai L, Gao H, Ji S (2019) Multi-stage variational auto-encoders for coarse-to-fine image generation. In: Proceedings of the 2019 SIAM International Conference on Data Mining, pp. 630–638. SIAM
Bao J, Chen D, Wen F, Li H, Hua G (2017) Cvae-gan: fine-grained image generation through asymmetric training. In: Proceedings of the IEEE international conference on computer vision, pp. 2745–2754
Chen X, Sun Y, Zhang M, Peng D (2020) Evolving deep convolutional variational autoencoders for image classification. IEEE Trans Evol Comput 25(5):815–829
Chamain LD, Qi S, Ding Z (2022) End-to-end image classification and compression with variational autoencoders. IEEE Internet Things J 9(21):21916–21931
Tao R, Gavves E, Smeulders AWM (2016) Siamese instance search for tracking. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1420–1429. https://doi.org/10.1109/CVPR.2016.158
Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Cehovin Zajc L, Vojir T, Bhat G, Lukezic A, Eldesokey A, Fernandez G, Garcia-Martin A, Iglesias-Arias A, Aydin Alatan A, Gonzalez-Garcia A, Petrosino A, Memarmoghadam A, Vedaldi A, Muhic A, He A, Smeulders A, Perera AG, Li B, Chen B, Kim C, Xu C, Xiong C, Tian C, Luo C, Sun C, Hao C, Kim D, Mishra D, Chen D, Wang D, Wee D, Gavves E, Gundogdu E, Velasco-Salido E, Shahbaz Khan F, Yang F, Zhao F, Li F, Battistone F, De Ath G, Subrahmanyam GRKS, Bastos G, Ling H, Kiani Galoogahi H, Lee H, Li H, Zhao H, Fan H, Zhang H, Possegger H, Li H, Lu H, Zhi H, Li H, Lee H, Jin Chang H, Drummond I, Valmadre J, Spencer Martin J, Chahl J, Young Choi J, Li J, Wang J, Qi J, Sung J, Johnander J, Henriques J, Choi J, Weijer J, Rodriguez Herranz J, Martinez JM, Kittler J, Zhuang J, Gao J, Grm K, Zhang L, Wang L, Yang L, Rout L, Si L, Bertinetto L, Chu L, Che M, Edoardo Maresca M, Danelljan M, Yang M-H, Abdelpakey M, Shehata M, Kang M, Lee N, Wang N, Miksik O, Moallem P, Vicente-Monivar P, Senna P, Li P, Torr P, Mariam Raju P, Ruihe Q, Wang Q, Zhou Q, Guo Q, Martin-Nieto R, Krishna Gorthi R, Tao R, Bowden R, Everson R, Wang R, Yun S, Choi S, Vivas S, Bai S, Huang S, Wu S, Hadfield S, Wang S, Golodetz S, Ming T, Xu T, Zhang T, Fischer T, Santopietro V, Struc V, Wei W, Zuo W, Feng W, Wu W, Zou W, Hu W, Zhou W, Zeng W, Zhang X, Wu X, Wu X-J, Tian X, Li Y, Lu Y, Wei Law Y, Wu Y, Demiris Y, Yang Y, Jiao Y, Li Y, Zhang Y, Sun Y, Zhang Z, Zhu Z, Feng Z-H, Wang Z, He Z (2018) The sixth visual object tracking vot2018 challenge results. In: Proceedings of the European Conference on Computer Vision (ECCV) Workshops
Khan Z, Balch T, Dellaert F (2005) Mcmc-based particle filtering for tracking a variable number of interacting targets. IEEE Trans Pattern Anal Mach Intell 27(11):1805–1819. https://doi.org/10.1109/TPAMI.2005.223
Tomczak J, Welling M (2018) Vae with a vampprior. In: Storkey, A., Perez-Cruz, F. (eds.) Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 84, pp. 1214–1223. PMLR. https://proceedings.mlr.press/v84/tomczak18a.html
Zhou T, Porikli F, Crandall DJ, Van Gool L, Wang W (2023) A survey on deep learning technique for video segmentation. IEEE Trans Pattern Anal Mach Intell 45(6):7099–7122. https://doi.org/10.1109/TPAMI.2022.3225573
Xu J, Wang X (2021) Rethinking self-supervised correspondence learning: A video frame-level similarity perspective. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10055–10065. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/ICCV48922.2021.00992
Kay W, Carreira J, Simonyan K, Zhang B, Hillier C, Vijayanarasimhan S, Viola F, Green T, Back T, Natsev A, Suleyman M, Zisserman A (2017) The kinetics human action video dataset. arXiv preprint arXiv:1705.06950
Vondrick C, Shrivastava A, Fathi A, Guadarrama S, Murphy K (2018) Tracking emerges by colorizing videos. In: Proceedings of the European conference on computer vision (ECCV)
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L (2015) ImageNet Large Scale Visual Recognition Challenge. Int J Comput Vis (IJCV) 115(3):211–252. https://doi.org/10.1007/s11263-015-0816-y
Wu Y, Lim J, Yang M-H (2013) Online object tracking: A benchmark. In: 2013 IEEE Conference on computer vision and pattern recognition, pp. 2411–2418. https://doi.org/10.1109/CVPR.2013.312
Perazzi F, Pont-Tuset J, McWilliams B, Van Gool L, Gross M, Sorkine-Hornung A (2016) A benchmark dataset and evaluation methodology for video object segmentation. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp. 724–732. https://doi.org/10.1109/CVPR.2016.85
Milan A, Leal-Taixé L, Reid ID, Roth S, Schindler K (2016) Mot16: A benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831
Voigtlaender P, Krause M, Osep A, Luiten J, Sekar BBG, Geiger A, Leibe B (2019) Mots: Multi-object tracking and segmentation. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7934–7943. https://doi.org/10.1109/CVPR.2019.00813
Andriluka M, Iqbal U, Insafutdinov E, Pishchulin L, Milan A, Gall J, Schiele B (2018) Posetrack: A benchmark for human pose estimation and tracking. In: 2018 IEEE/CVF conference on computer vision and pattern recognition, pp. 5167–5176. https://doi.org/10.1109/CVPR.2018.00542
Zhang Z, Peng H (2019) Deeper and wider siamese networks for real-time visual tracking. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4591–4600
Zitnick CL, Dollár P (2014) Edge boxes: Locating object proposals from edges. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer Vision - ECCV 2014. Springer, Cham, pp 391–405
Chen T, Kornblith S, Norouzi M, Hinton G (2020) A simple framework for contrastive learning of visual representations. In: III, H.D., Singh, A. (eds.) Proceedings of the 37th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 119, pp. 1597–1607. PMLR, https://proceedings.mlr.press/v119/chen20j.html
He K, Fan H, Wu Y, Xie S, Girshick R (2020) Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729–9738
Finn C, Abbeel P, Levine S (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 70, pp. 1126–1135. PMLR. https://proceedings.mlr.press/v70/finn17a.html
Danelljan M, Bhat G, Khan F, Felsberg M (2019) Atom: Accurate tracking by overlap maximization. In: 2019 IEEE/CVF Conference on computer vision and pattern recognition (CVPR), pp. 4655–4664. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR.2019.00479
Li B, Wu W, Wang Q, Zhang F, Xing J, Yan J (2019) Siamrpn++: Evolution of siamese visual tracking with very deep networks. In: Proceedings of the IEEE/CVF conference on computer vision and pattern Recognition (CVPR)
Bhat G, Lawin FJ, Danelljan M, Robinson A, Felsberg M, Van Gool L, Timofte R (2020) Learning what to learn for video object segmentation. In: Computer vision–ECCV 2020: 16th European conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pp. 777–794. Springer
Hinton G, Vinyals O, Dean J (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531
Li Z, Hoiem D (2017) Learning without forgetting. IEEE Trans Pattern Anal Mach Intell 40(12):2935–2947
Lin T, Goyal P, Girshick R, He K, Dollar P (2017) Focal loss for dense object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2999–3007. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/ICCV.2017.324
Sung K-K, Poggio T (1998) Example-based learning for view-based human face detection. IEEE Trans Pattern Anal Mach Intell 20(1):39–51. https://doi.org/10.1109/34.655648
Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587
Ma C, Huang J-B, Yang X, Yang M-H (2015) Hierarchical convolutional features for visual tracking. In: 2015 IEEE international conference on computer vision (ICCV), pp. 3074–3082. https://doi.org/10.1109/ICCV.2015.352
Henriques JF, Caseiro R, Martins P, Batista J (2015) High-speed tracking with kernelized correlation filters. IEEE Trans Pattern Anal Mach Intell 37(3):583–596. https://doi.org/10.1109/TPAMI.2014.2345390
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9. https://doi.org/10.1109/CVPR.2015.7298594
Fan H, Bai H, Lin L, Yang F, Chu P, Deng G, Yu S, Harshit, Huang M, Liu J, Xu Y, Liao C, Yuan L, Ling H (2021) Lasot: A high-quality large-scale single object tracking benchmark. Int J Comput Vis 129(2):439–461. https://doi.org/10.1007/s11263-020-01387-y
Muller M, Bibi A, Giancola S, Alsubaihi S, Ghanem B (2018) Trackingnet: A large-scale dataset and benchmark for object tracking in the wild. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y (eds) Computer Vision - ECCV 2018. Springer, Cham, pp 310–327
Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL (2014) Microsoft coco: Common objects in context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer Vision - ECCV 2014. Springer, Cham, pp 740–755
Mueller M, Smith N, Ghanem B (2016) A benchmark and simulator for uav tracking. In: Computer Vision–ECCV 2016: 14th European conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I 14, pp. 445–461. Springer
Li A, Lin M, Wu Y, Yang M-H, Yan S (2016) Nus-pro: A new visual tracking challenge. IEEE Trans Pattern Anal Mach Intell 38(2):335–349. https://doi.org/10.1109/TPAMI.2015.2417577
Galoogahi H, Fagg A, Huang C, Ramanan D, Lucey S (2017) Need for speed: A benchmark for higher frame rate object tracking. In: 2017 IEEE international conference on computer vision (ICCV), pp. 1134–1143. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/ICCV.2017.128
Silberman N, Hoiem D, Kohli P, Fergus R (2012) Indoor segmentation and support inference from rgbd images. In: Fitzgibbon A, Lazebnik S, Perona P, Sato Y, Schmid C (eds) Computer Vision - ECCV 2012. Springer, Berlin, Heidelberg, pp 746–760
Xu N, Yang L, Fan Y, Yang J, Yue D, Liang Y, Price B, Cohen S, Huang T (2018) Youtube-vos: Sequence-to-sequence video object segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV)
Real E, Shlens J, Mazzocchi S, Pan X, Vanhoucke V (2017) Youtube-boundingboxes: A large high-precision human-annotated data set for object detection in video. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7464–7473. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/CVPR.2017.789
Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Zajc L, Vojír T, Häger G, Lukežič A, Fernandez Dominguez G, Gupta A, Petrosino A, Memarmoghadam A, Garcia-Martin A, Montero A, Vedaldi A, Robinson A, Ma A, Varfolomieiev A, Chi Z (2016) The visual object tracking vot2016 challenge results. In: Computer Vision - ECCV 2016 Workshops. Lecture Notes in Computer Science, vol. 9914, pp. 777–823. Springer. https://doi.org/10.1007/978-3-319-48881-3_54
Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Zajc LC, Vojír T, Häger G, Lukežic A, Eldesokey A, Fernández G, Garcia-Martin A, Muhic A, Petrosino A, Memarmoghadam A, Vedaldi A, Manzanera A, Tran A, Alatan A, Mocanu B, Chen B, Huang C, Xu C, Sun C, Du D, Zhang D, Du D, Mishra D, Gundogdu E, Velasco-Salido E, Khan FS, Battistone F, Subrahmanyam GRKS, Bhat G, Huang G, Bastos G, Seetharaman G, Zhang H, Li H, Lu H, Drummond I, Valmadre J, Jeong J-c, Cho J-i, Lee J-Y, Noskova J, Zhu J, Gao J, Liu J, Kim J-W, Henriques JF, Martínez JM, Zhuang J, Xing J, Gao J, Chen K, Palaniappan K, Lebeda K, Gao K, Kitani KM, Zhang L, Wang L, Yang L, Wen L, Bertinetto L, Poostchi M, Danelljan M, Mueller M, Zhang M, Yang M-H, Xie N, Wang N, Miksik O, Moallem P, Venugopal PM, Senna P, Torr PHS, Wang Q, Yu Q, Huang Q, Martín-Nieto R, Bowden R, Liu R, Tapu R, Hadfield S, Lyu S, Golodetz S, Choi S, Zhang T, Zaharia T, Santopietro V, Zou W, Hu W, Tao W, Li W, Zhou W, Yu X, Bian X, Li Y, Xing Y, Fan Y, Zhu Z, Zhang Z, He Z (2017) The visual object tracking vot2017 challenge results. In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), pp. 1949–1972. https://doi.org/10.1109/ICCVW.2017.230
Kristan, M., Matas, J., Leonardis, A., Felsberg, M., Pflugfelder, R., Kämäräinen, J.-K., Zajc, L.C., Drbohlav, O., Lukezic, A., Berg, A., Eldesokey, A., Käpylä, J., Fernández, G., Gonzalez-Garcia, A., Memarmoghadam, A., Lu, A., He, A., Varfolomieiev, A., Chan, A., Tripathi, A.S., Smeulders, A., Pedasingu, B.S., Chen, B.X., Zhang, B., Wu, B., Li, B., He, B., Yan, B., Bai, B., Li, B., Li, B., Kim, B.H., Ma, C., Fang, C., Qian, C., Chen, C., Li, C., Zhang, C., Tsai, C.-Y., Luo, C., Micheloni, C., Zhang, C., Tao, D., Gupta, D., Song, D., Wang, D., Gavves, E., Yi, E., Khan, F.S., Zhang, F., Wang, F., Zhao, F., Ath, G.D., Bhat, G., Chen, G., Wang, G., Li, G., Cevikalp, H., Du, H., Zhao, H., Saribas, H., Jung, H.M., Bai, H., Yu, H., Peng, H., Lu, H., Li, H., Li, J., Li, J., Fu, J., Chen, J., Gao, J., Zhao, J., Tang, J., Li, J., Wu, J., Liu, J., Wang, J., Qi, J., Zhang, J., Tsotsos, J.K., Lee, J.H., Weijer, J.v.d., Kittler, J., Lee, J.H., Zhuang, J., Zhang, K., Wang, K., Dai, K., Chen, L., Liu, L., Guo, L., Zhang, L., Wang, L., Wang, L., Zhang, L., Wang, L., Zhou, L., Zheng, L., Rout, L., Gool, L.V., Bertinetto, L., Danelljan, M., Dunnhofer, M., Ni, M., Kim, M.Y., Tang, M., Yang, M.-H., Paluru, N., Martinel, N., Xu, P., Zhang, P., Zheng, P., Zhang, P., Torr, P.H.S., Wang, Q.Z.Q., Guo, Q., Timofte, R., Gorthi, R.K., Everson, R., Han, R., Zhang, R., You, S., Zhao, S.-C., Zhao, S., Li, S., Li, S., Ge, S., Bai, S., Guan, S., Xing, T., Xu, T., Yang, T., Zhang, T., Vojir, T., Feng, W., Hu, W., Wang, W., Tang, W., Zeng, W., Liu, W., Chen, X., Qiu, X., Bai, X., Wu, X.-J., Yang, X., Chen, X., Li, X., Sun, X., Chen, X., Tian, X., Tang, X., Zhu, X.-F., Huang, Y., Chen, Y., Lian, Y., Gu, Y., Liu, Y., Chen, Y., Zhang, Y., Xu, Y., Wang, Y., Li, Y., Zhou, Y., Dong, Y., Xu, Y., Zhang, Y., Li, Y., Luo, Z.W.Z., Zhang, Z., Feng, Z.-H., He, Z., Song, Z., Chen, Z., Zhang, Z., Wu, Z., Xiong, Z., Huang, Z., Teng, Z., Ni, Z.: The seventh visual object tracking vot2019 challenge results. 
In: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 2206–2241 (2019). https://doi.org/10.1109/ICCVW.2019.00276
Kristan M, Leonardis A, Matas J, Felsberg M, Pflugfelder R, Kämäräinen J-K, Danelljan M, Zajc LČ, Lukežič A, Drbohlav O, He L, Zhang Y, Yan S, Yang J, Fernández G, Hauptmann A, Memarmoghadam A, García-Martín Á, Robinson A, Varfolomieiev A, Gebrehiwot AH, Uzun B, Yan B, Li B, Qian C, Tsai C-Y, Micheloni C, Wang D, Wang F, Xie F, Lawin FJ, Gustafsson F, Foresti GL, Bhat G, Chen G, Ling H, Zhang H, Cevikalp H, Zhao H, Bai H, Kuchibhotla HC, Saribas H, Fan H, Ghanei-Yakhdan H, Li H, Peng H, Lu H, Li H, Khaghani J, Bescos J, Li J, Fu J, Yu J, Xu J, Kittler J, Yin J, Lee J, Yu K, Liu K, Yang K, Dai K, Cheng L, Zhang L, Wang L, Wang L, Van Gool L, Bertinetto L, Dunnhofer M, Cheng M, Dasari MM, Wang N, Wang N, Zhang P, Torr PHS, Wang Q, Timofte R, Gorthi RKS, Choi S, Marvasti-Zadeh SM, Zhao S, Kasaei S, Qiu S, Chen S, Schön TB, Xu T, Lu W, Hu W, Zhou W, Qiu X, Ke X, Wu X-J, Zhang X, Yang X, Zhu X, Jiang Y, Wang Y, Chen Y, Ye Y, Li Y, Yao Y, Lee Y, Gu Y, Wang Z, Tang Z, Feng Z-H, Mai Z, Zhang Z, Wu Z, Ma Z (2020) The eighth visual object tracking vot2020 challenge results. In: Bartoli A, Fusiello A (eds) Computer Vision - ECCV 2020 Workshops. Springer, Cham, pp 547–601
Li Z, Tao R, Gavves E, Snoek CGM, Smeulders AWM (2017) Tracking by natural language specification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Li Y, Yu J, Cai Z, Pan Y (2022) Cross-modal target retrieval for tracking by natural language. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 4931–4940
Zhou L, Zhou Z, Mao K, He Z (2023) Joint visual grounding and tracking with natural language specification. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 23151–23160
Li X, Huang Y, He Z, Wang Y, Lu H, Yang M (2023) Citetracker: Correlating image and text for visual tracking. In: 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9940–9949. IEEE Computer Society, Los Alamitos, CA, USA. https://doi.org/10.1109/ICCV51070.2023.00915
Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler D, Wu J, Winter C, Hesse C, Chen M, Sigler E, Litwin M, Gray S, Chess B, Clark J, Berner C, McCandlish S, Radford A, Sutskever I, Amodei D (2020) Language models are few-shot learners. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901. Curran Associates, Inc
Chen K, Liu Z, Hong L, Xu H, Li Z, Yeung D-Y (2023) Mixed autoencoder for self-supervised visual representation learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 22742–22751
Dave IR, Jenni S, Shah M (2023) No more shortcuts: Realizing the potential of temporal self-supervision
The authors did not receive support from any organization for the submitted work.
Authors and affiliations.
Department of Computer Science, Mathematics, Physics and Statistics, The University of British Columbia, 3333 University Way, Kelowna, V1V1V7, BC, Canada
Omar Abdelaziz, Mohamed Shehata & Mohamed Mohamed
Conceptualization: OA, MS, MM; methodology: OA; formal analysis and investigation: OA; writing—original draft preparation: OA; writing—review and editing: OA, MS, MM; supervision: MS
Correspondence to Omar Abdelaziz.
Conflict of interest.
The authors have no relevant financial or non-financial interests to disclose.
Consent for publication.
The authors consent to publish this work in the International Journal of Machine Learning and Cybernetics.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Abdelaziz, O., Shehata, M. & Mohamed, M. Beyond traditional visual object tracking: a survey. Int. J. Mach. Learn. & Cyber. (2024). https://doi.org/10.1007/s13042-024-02345-7
Received : 25 April 2024
Accepted : 19 August 2024
Published : 26 August 2024
DOI : https://doi.org/10.1007/s13042-024-02345-7
Single object tracking is a vital task of many applications in critical fields. However, it is still considered one of the most challenging vision tasks. In recent years, computer vision, especially object tracking, witnessed the introduction or adoption of many novel techniques, setting new fronts for performance. In this survey, we visit some of the cutting-edge techniques in vision, such as ...