<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE root>
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="research-article" dtd-version="1.2" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">Programming and Computer Software</journal-id><journal-title-group><journal-title xml:lang="en">Programming and Computer Software</journal-title><trans-title-group xml:lang="ru"><trans-title>Программирование</trans-title></trans-title-group></journal-title-group><issn publication-format="print">0132-3474</issn><issn publication-format="electronic">3034-5847</issn><publisher><publisher-name xml:lang="en">The Russian Academy of Sciences</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="publisher-id">688102</article-id><article-id pub-id-type="doi">10.31857/S0132347425030036</article-id><article-id pub-id-type="edn">GQWVAO</article-id><article-categories><subj-group subj-group-type="toc-heading" xml:lang="en"><subject>COMPUTER GRAPHICS AND VISUALIZATION</subject></subj-group><subj-group subj-group-type="toc-heading" xml:lang="ru"><subject>КОМПЬЮТЕРНАЯ ГРАФИКА И ВИЗУАЛИЗАЦИЯ</subject></subj-group><subj-group subj-group-type="article-type"><subject>Research Article</subject></subj-group></article-categories><title-group><article-title xml:lang="en">Reconstruction of optical properties of real scene objects from images by taking into account secondary illumination and selecting the most important points</article-title><trans-title-group xml:lang="ru"><trans-title>Реконструкция оптических свойств объектов реальной сцены по изображениям с учетом вторичного освещения и выбором наиболее важных точек</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author"><contrib-id contrib-id-type="orcid">https://orcid.org/0009-0006-8623-3578</contrib-id><name-alternatives><name xml:lang="en"><surname>Kupriyanov</surname><given-names>S. I.</given-names></name><name xml:lang="ru"><surname>Куприянов</surname><given-names>С. И.</given-names></name></name-alternatives><address><country country="RU">Russian Federation</country></address><email>stasz776@gmail.com</email><xref ref-type="aff" rid="aff1"/></contrib><contrib contrib-type="author"><contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-2929-1203</contrib-id><name-alternatives><name xml:lang="en"><surname>Kinev</surname><given-names>I. E.</given-names></name><name xml:lang="ru"><surname>Кинёв</surname><given-names>И. 
Е.</given-names></name></name-alternatives><address><country country="RU">Russian Federation</country></address><email>igorkinevitmo@gmail.com</email><xref ref-type="aff" rid="aff1"/></contrib></contrib-group><aff-alternatives id="aff1"><aff><institution xml:lang="en">National Research ITMO University</institution></aff><aff><institution xml:lang="ru">Национальный исследовательский университет ИТМО 197101</institution></aff></aff-alternatives><pub-date date-type="pub" iso-8601-date="2025-07-04" publication-format="electronic"><day>04</day><month>07</month><year>2025</year></pub-date><issue>3</issue><issue-title xml:lang="en"/><issue-title xml:lang="ru"/><fpage>27</fpage><lpage>39</lpage><history><date date-type="received" iso-8601-date="2025-07-22"><day>22</day><month>07</month><year>2025</year></date><date date-type="accepted" iso-8601-date="2025-07-22"><day>22</day><month>07</month><year>2025</year></date></history><permissions><copyright-statement xml:lang="en">Copyright © 2025, Russian Academy of Sciences</copyright-statement><copyright-statement xml:lang="ru">Copyright © 2025, Российская академия наук</copyright-statement><copyright-year>2025</copyright-year><copyright-holder xml:lang="en">Russian Academy of Sciences</copyright-holder><copyright-holder xml:lang="ru">Российская академия наук</copyright-holder></permissions><self-uri xlink:href="https://transsyst.ru/0132-3474/article/view/688102">https://transsyst.ru/0132-3474/article/view/688102</self-uri><abstract xml:lang="en"><p>This paper presents a method for reconstructing the optical properties of objects in a real scene from a series of its images using differentiable rendering. The main goal of this study is to develop an approach that enables the high-accuracy reconstruction of the optical characteristics of scene objects while minimizing the computational costs. The Introduction discusses the relevance of creating realistic models of virtual scenes for computer graphics, as well as their application in virtual reality, augmented reality, and animation. It is noted that, in order to achieve image realism, it is necessary to take into account the scene geometry, illumination parameters, and optical properties of objects. In this study, it is assumed that the scene geometry and light sources are known, and the main task is to reconstruct the optical properties of objects. Section 3 describes the main stages of the proposed approach. The first stage involves data preprocessing, during which key image points characterized by high brightness and a uniform distribution over scene objects are selected. This significantly reduces the amount of data required for optimization. Next, luminance gradients with respect to the model parameters are calculated using numerical differentiation and backward ray tracing. The proposed algorithm takes into account both primary and secondary illumination, which improves the accuracy of reconstructing the optical characteristics of the scene. At the final stage, the parameters of the optical models are reconstructed using the Adam method, improved with the Optuna library for automatic hyperparameter selection. Section 4 describes the experiments carried out on the Cornell Box scene. The result of reconstructing the optical properties is considered, and the original and reconstructed luminances are compared. Certain limitations due to the duration of calculations and the sensitivity to data outliers are identified and discussed in detail.
In the Conclusions, the results are summarized and directions for further development are outlined, including the transfer of calculations to the GPU and the use of more complex models of optical properties to improve the accuracy and speed of the algorithm.</p></abstract><trans-abstract xml:lang="ru"><p>В статье представлен метод реконструкции оптических свойств объектов реальной сцены по ряду ее изображений, основанный на использовании методов дифференцируемого рендеринга. Основной целью исследования является разработка подхода, позволяющего с высокой точностью восстановить оптические характеристики объектов сцены при минимизации вычислительных затрат. Во введении описана актуальность создания реалистичных виртуальных моделей сцен для компьютерной графики и их применения в таких областях, как виртуальная и дополненная реальность, анимация. Отмечено, что для достижения реализма изображения необходимо учитывать геометрию сцены, параметры освещения и оптические свойства объектов. В данной работе предполагается, что геометрия сцены и источники света известны, а основной задачей является восстановление оптических свойств объектов. Раздел “Методы” описывает основные этапы предложенного подхода. Первая стадия включает предварительную обработку данных, в ходе которой осуществляется выбор ключевых точек изображения, характеризующихся высокой яркостью и равномерным распределением по объектам сцены. Это позволяет значительно сократить объем данных, необходимых для оптимизации. Далее, используя численное дифференцирование и обратную трассировку лучей, вычисляются градиенты яркости по параметрам модели. Предложенный алгоритм учитывает как первичное, так и вторичное освещение, что повышает точность восстановления оптических характеристик сцены. На завершающем этапе параметры оптических моделей восстанавливаются с помощью метода Adam, улучшенного с использованием библиотеки Optuna для автоматического подбора гиперпараметров. В разделе результатов представлены эксперименты, выполненные на сцене Cornell Box. Демонстрируется результат восстановления оптических свойств и сравниваются оригинальная и восстановленная яркости. Выявлены ограничения, связанные с длительностью вычислений и чувствительностью к выбросам данных, которые подробно рассмотрены в работе. В заключении подведены итоги и предложены направления для дальнейшего развития, включая перенос вычислений на GPU и использование более сложных моделей оптических свойств для повышения точности и скорости алгоритма.</p></trans-abstract><kwd-group xml:lang="en"><kwd>rendering</kwd><kwd>differentiable rendering</kwd><kwd>ray tracing</kwd><kwd>light scattering</kwd><kwd>optical property reconstruction</kwd></kwd-group><kwd-group xml:lang="ru"><kwd>рендеринг</kwd><kwd>дифференцируемый рендеринг</kwd><kwd>трассировка лучей</kwd><kwd>рассеивание света</kwd><kwd>реконструкция оптических свойств</kwd></kwd-group><funding-group/></article-meta></front><body></body><back><ref-list><ref id="B1"><label>1.</label><citation-alternatives><mixed-citation xml:lang="en">Veach E. Robust Monte Carlo methods for light transport simulation, PhD Dissertation, Stanford University, 1998.</mixed-citation><mixed-citation xml:lang="ru">Veach E. Robust Monte Carlo methods for light transport simulation. Ph.D. Dissertation, Stanford University. 1998. P. 406.</mixed-citation></citation-alternatives></ref><ref id="B2"><label>2.</label><citation-alternatives><mixed-citation xml:lang="en">Bogolepov D.K., Ulyanov D. 
GPU-optimized bidirectional path tracing, Proceedings of the 21st International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision, 2013, p. 15.</mixed-citation><mixed-citation xml:lang="ru">Bogolepov D.K., Ulyanov D. GPU-Optimized Bidirectional Path Tracing. In Proc. of the 21st International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision. 2013. P. 15.</mixed-citation></citation-alternatives></ref><ref id="B3"><label>3.</label><citation-alternatives><mixed-citation xml:lang="en">Veach E., Guibas L.J. Metropolis light transport, Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques–SIGGRAPH’97, New York: ACM Press/Addison-Wesley, 1997, pp. 65–76. https://doi.org/10.1145/258734.258775</mixed-citation><mixed-citation xml:lang="ru">Veach E., Guibas L.J. Metropolis Light Transport. In Proc. of the 24th Annual Conference on Computer Graphics and Interactive Techniques. 1997. P. 65–76.</mixed-citation></citation-alternatives></ref><ref id="B4"><label>4.</label><citation-alternatives><mixed-citation xml:lang="en">Bitterli B., Jakob W., Novák J., Jarosz W. Reversible jump Metropolis light transport using inverse mappings, ACM Trans. Graphics, 2017, vol. 37, no. 1, p. 1. https://doi.org/10.1145/3132704</mixed-citation><mixed-citation xml:lang="ru">Bitterli B., Jakob W., Novak J., Jarosz W. Reversible Jump Metropolis Light Transport Using Inverse Mappings. ACM Transactions on Graphics. 2017. T. 37. № 1. P. 1–12.</mixed-citation></citation-alternatives></ref><ref id="B5"><label>5.</label><citation-alternatives><mixed-citation xml:lang="en">Gruson A., West R., Hachisuka T. Stratified Markov chain Monte Carlo light transport, Comput. Graphics Forum, 2020, vol. 39, no. 2, pp. 351–362. https://doi.org/10.1111/cgf.13935</mixed-citation><mixed-citation xml:lang="ru">Gruson A., West R., Hachisuka T. Stratified Markov Chain Monte Carlo Light Transport. Computer Graphics Forum. 2020. V. 39. № 2. P. 351–362.</mixed-citation></citation-alternatives></ref><ref id="B6"><label>6.</label><citation-alternatives><mixed-citation xml:lang="en">Jensen H.W. Global illumination using photon maps, Rendering Techniques ’96, Pueyo X. and Schröder P., Eds., Vienna: Springer, 1996, pp. 21–30. https://doi.org/10.1007/978-3-7091-7484-5_3</mixed-citation><mixed-citation xml:lang="ru">Jensen H.W. Global illumination using photon maps. Eurographics workshop on Rendering techniques. Springer, Vienna. 1996. P. 21–30.</mixed-citation></citation-alternatives></ref><ref id="B7"><label>7.</label><citation-alternatives><mixed-citation xml:lang="en">Kato H., Beker D., Morariu M., Ando T., Matsuoka T., Kehl W., Gaidon A. Differentiable rendering: A survey, arXiv Preprint, 2020. https://doi.org/10.48550/arXiv.2006.12057</mixed-citation><mixed-citation xml:lang="ru">Kato H., Beker D., Morariu M., Ando T., Matsuoka T., Kehl W., Gaidon A. Differentiable Rendering: A Survey. 2020.</mixed-citation></citation-alternatives></ref><ref id="B8"><label>8.</label><citation-alternatives><mixed-citation xml:lang="en">Phong B.T. Illumination for computer generated pictures, Commun. ACM, 1975, vol. 18, no. 6, pp. 311–317. https://doi.org/10.1145/360825.360839</mixed-citation><mixed-citation xml:lang="ru">Phong B.T. Illumination for computer generated pictures. Communications of ACM 18. 1975. V. 6. P. 
311–317.</mixed-citation></citation-alternatives></ref><ref id="B9"><label>9.</label><citation-alternatives><mixed-citation xml:lang="en">Cook R.L., Torrance K.E. A reflectance model for computer graphics, ACM SIGGRAPH Computer Graphics, 1981, vol. 15, no. 3, pp. 307–316. https://doi.org/10.1145/965161.806819</mixed-citation><mixed-citation xml:lang="ru">Cook R.L., Torrance K.E. A Reflectance Model for Computer Graphics. ACM Transactions on Graphics. 1981. V. 1. № 3. P. 301–316.</mixed-citation></citation-alternatives></ref><ref id="B10"><label>10.</label><citation-alternatives><mixed-citation xml:lang="en">Burley B. Physically based shading at Disney, ACM Trans. Graphics, 2012, p. 7.</mixed-citation><mixed-citation xml:lang="ru">Burley B. Physically Based Shading at Disney. ACM Transactions on Graphics (ACM SIGGRAPH). 2012. P. 7.</mixed-citation></citation-alternatives></ref><ref id="B11"><label>11.</label><citation-alternatives><mixed-citation xml:lang="en">Loper M.M., Black M.J. OpenDR: An approximate differentiable renderer, Computer Vision–ECCV 2014, Fleet D., Pajdla T., Schiele B., Tuytelaars T., Eds., Lecture Notes in Computer Science, Cham: Springer, 2014, vol. 8695, pp. 154–169. https://doi.org/10.1007/978-3-319-10584-0_11</mixed-citation><mixed-citation xml:lang="ru">Loper M.M., Black M.J. OpenDR: An approximate differentiable renderer. in ECCV. 2014.</mixed-citation></citation-alternatives></ref><ref id="B12"><label>12.</label><citation-alternatives><mixed-citation xml:lang="en">Kato H., Ushiku Yo., Harada T. Neural 3D mesh renderer, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, IEEE, 2018, pp. 3907–3916. https://doi.org/10.1109/cvpr.2018.00411</mixed-citation><mixed-citation xml:lang="ru">Kato H., Ushiku Y., Harada T. Neural 3D Mesh Renderer. in CVPR. 2018.</mixed-citation></citation-alternatives></ref><ref id="B13"><label>13.</label><citation-alternatives><mixed-citation xml:lang="en">Genova K., Cole F., Maschinot A., Sarna A., Vlasic D., Freeman W.T. Unsupervised training for 3D morphable model regression, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, IEEE, 2018, pp. 8377–8386. https://doi.org/10.1109/cvpr.2018.00874</mixed-citation><mixed-citation xml:lang="ru">Genova K., Cole F., Maschinot A., Sarna A., Vlasic D., Freeman W.T. Unsupervised Training for 3D Morphable Model Regression. in CVPR. 2018.</mixed-citation></citation-alternatives></ref><ref id="B14"><label>14.</label><citation-alternatives><mixed-citation xml:lang="en">Rhodin H., Robertini N., Richardt Ch., Seidel H.-P., Theobalt Ch. A versatile scene model with differentiable visibility applied to generative pose estimation, 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 2015, IEEE, 2015, pp. 765–773. https://doi.org/10.1109/iccv.2015.94</mixed-citation><mixed-citation xml:lang="ru">Rhodin H., Robertini N., Richardt C., Seidel H.-P., Theobalt C. A Versatile Scene Model with Differentiable Visibility Applied to Generative Pose Estimation. in ICCV. 2015.</mixed-citation></citation-alternatives></ref><ref id="B15"><label>15.</label><citation-alternatives><mixed-citation xml:lang="en">Kajiya J.T. The rendering equation, ACM SIGGRAPH Computer Graphics, 1986, vol. 20, no. 4, pp. 143–150. https://doi.org/10.1145/15886.15902</mixed-citation><mixed-citation xml:lang="ru">Kajiya J.T. The rendering equation. ACM SIGGRAPH Computer Graphics. 1986. V. 20. № 4. P. 
143–150.</mixed-citation></citation-alternatives></ref><ref id="B16"><label>16.</label><citation-alternatives><mixed-citation xml:lang="en">Li T.-M., Aittala M., Durand F., Lehtinen J. Differentiable Monte Carlo ray tracing through edge sampling, ACM Trans. Graphics, 2018, vol. 37, no. 6, p. 222. https://doi.org/10.1145/3272127.3275109</mixed-citation><mixed-citation xml:lang="ru">Li T.M., Aittala M., Durand F., Lehtinen J. Differentiable monte carlo ray tracing through edge sampling. ACM Trans. Graph. 2018. V. 37. № 6. P. 11.</mixed-citation></citation-alternatives></ref><ref id="B17"><label>17.</label><citation-alternatives><mixed-citation xml:lang="en">Zhang Ch., Wu L., Zheng Ch., Gkioulekas I., Ramamoorthi R., Zhao Sh. A differential theory of radiative transfer, ACM Trans. Graphics, 2019, vol. 38, no. 6, p. 227. https://doi.org/10.1145/3355089.3356522</mixed-citation><mixed-citation xml:lang="ru">Zhang C., Wu L., Zheng C., Gkioulekas I., Ramamoorthi R., Zhao S. A differential theory of radiative transfer. ACM Trans. Graph. 2019. V. 38. № 6. P. 16.</mixed-citation></citation-alternatives></ref><ref id="B18"><label>18.</label><citation-alternatives><mixed-citation xml:lang="en">Zhao Sh., Jakob W., Li T.-M. Physics-based differentiable rendering: From theory to implementation, SIGGRAPH 2020: ACM SIGGRAPH 2020 Courses, New York: Association for Computing Machinery, 2020, p. 14. https://doi.org/10.1145/3388769.3407454</mixed-citation><mixed-citation xml:lang="ru">Shuang Z., Wenzel J., Tzu-Mao L. Physics-Based Differentiable Rendering: From Theory to Implementation. 2020.</mixed-citation></citation-alternatives></ref><ref id="B19"><label>19.</label><citation-alternatives><mixed-citation xml:lang="en">Nimier-David M., Vicini D., Zeltner T., Jakob W. Mitsuba 2: A retargetable forward and inverse renderer, ACM Trans. Graphics, 2019, vol. 38, no. 6, p. 203. https://doi.org/10.1145/3355089.3356498</mixed-citation><mixed-citation xml:lang="ru">Merlin N., Delio V., Tizian Z., Wenzel J. Mitsuba 2: A Retargetable Forward and Inverse Renderer. 2019.</mixed-citation></citation-alternatives></ref><ref id="B20"><label>20.</label><citation-alternatives><mixed-citation xml:lang="en">Sorokin M.I., Zhdanov D.D., Zhdanov A.D., Potemin I.S., Bogdanov N.N. Restoration of lighting parameters in mixed reality systems using convolutional neural network technology based on RGBD images, Program. Comput. Software, 2020, vol. 46, no. 3, pp. 207–216. https://doi.org/10.1134/s0361768820030093</mixed-citation><mixed-citation xml:lang="ru">Сорокин М.И., Жданов Д.Д., Жданов А.Д., Потемин И.С., Богданов Н.Н. Восстановление параметров освещения в системах смешанной реальности с помощью технологии сверточных нейронных сетей по RGBD-изображениям. Программирование. 2020. № 3. С. 24–34.</mixed-citation></citation-alternatives></ref><ref id="B21"><label>21.</label><citation-alternatives><mixed-citation xml:lang="en">Kinev I.E., Kupriyanov S.I. Restoring optical properties of scene objects by differentiable rendering with optimization of most important point selection, Trudy konferentsii GrafiKon–2024 (Proc. Conf. GrafiKon–2024), 2024, pp. 179–193.</mixed-citation><mixed-citation xml:lang="ru">Кинёв И.Е., Куприянов С.И. Восстановление оптических свойств объектов сцены методом дифференцируемого рендеринга с применением оптимизации выбора наиболее важных точек. Труды конференции ГрафиКон – 2024. 2024. C. 
179–193.</mixed-citation></citation-alternatives></ref><ref id="B22"><label>22.</label><citation-alternatives><mixed-citation xml:lang="en">Zhdanov D.D., Guskov K.S., Zhdanov A.D., Potemin I.S., Kulbako A.Yu., Alexandrov Yu.V., Lopatin A.V., Sokolov V.G. Using a federated approach to synthesize images of confidential scene models, Light Eng., 2024, vol. 32, no. 4, pp. 89–102. https://doi.org/10.33383/2024-017</mixed-citation><mixed-citation xml:lang="ru">Zhdanov D.D., Guskov K.S., Zhdanov A.D., Potemin I.S., Kulbako A.Y., Alexandrov Y.V., Lopatin A.V., Sokolov V.G. Using a Federated Approach to Synthesize Images of Confidential Scene Models. Light &amp; Engineering. 2024. V. 32. № 4. P. 89–102.</mixed-citation></citation-alternatives></ref><ref id="B23"><label>23.</label><citation-alternatives><mixed-citation xml:lang="en">Optuna – A hyperparameter optimization framework. https://optuna.org</mixed-citation><mixed-citation xml:lang="ru">Optuna – A hyperparameter optimization framework. https://optuna.org. 2024</mixed-citation></citation-alternatives></ref><ref id="B24"><label>24.</label><citation-alternatives><mixed-citation xml:lang="en">Adam – PyTorch 2.5 documentation. https://pytorch.org/docs/stable/generated/torch.optim.Adam.html</mixed-citation><mixed-citation xml:lang="ru">Adam – PyTorch 2.5 documentation. https://pytorch.org/docs/stable/generated/torch.optim.Adam.html. 2024</mixed-citation></citation-alternatives></ref></ref-list></back></article>
