User:Strijov/Drafts

From MachineLearning.


2021

Each entry lists: Author. Topic. Links. Consultant. Letters. Reviewer (Review).

  • Grebenkova Olga (example). Topic: Variational optimization of deep learning models with model complexity control. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Oleg Bakhteev. Letters: AILP+UXBR+HCV+TEDWSS. Reviewer: Shokorov Vyacheslav (Review).
  • Pilkevich Anton. Topic: Existence conditions for hidden feedback loops in recommender systems. Links: GitHub, LinkReview, Paper, Slides, Video. Consultant: Khritankov Anton. Letters: AILB*P-X+R-B-H1CVO*T-EM*H1WJSF. Reviewer: Gorpinich Maria (Review).
  • Antonina Kurdyukova. Topic: Determining the phase and disorder of human movement based on the signals of wearable devices. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Georgy Kormakov. Letters: AILB*PXBRH1CVO*TEM*WJSF. Reviewer: Pilkevich Anton (Review).
  • Yakovlev Konstantin. Topic: A differentiable search algorithm for model architecture with control over its complexity. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Grebenkova Olga. Letters: AILB*PXBRH1CVO*TEM*WJSF. Reviewer: Pyrau Vitaly (Review).
  • Gorpinich Maria. Topic: Trajectory Regularization of Deep Learning Model Parameters Optimization Based on Knowledge Distillation. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Oleg Bakhteev. Letters: AILB*P+XBRC+VH1O*TEM*WJSF. Reviewer: Kulakov Yaroslav (Review).
  • Alexandr Tolmachev. Topic: Analysis of the QPFS Feature Selection Method for Generalized Linear Models. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Aduenko Alexander. Letters: AILB*PXB-R-H1CVO*TEM*WJSF. Reviewer: Antonina Kurdyukova (Review).
  • Kulakov Yaroslav. Topic: BCI: Selection of consistent models for building a neural interface. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Isachenko Roman. Letters: AILB*PXBRH1CVO*TEM*WJ0SF. Reviewer: Zverev Egor (Review).
  • Pyrau Vitaly. Topic: Experimental comparison of several problems of operational planning of biochemical production. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Trenin Sergey Alekseevich. Letters: AILB*PXBRH1CVO*TEM*WJSF. Reviewer: Yakovlev Konstantin (Review).
  • Bazhenov Andrey. Topic: Search for the boundaries of the iris by the method of circular projections. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Matveev Ivan Alekseevich. Letters: AILB*PXB0RH1CVO*TEM*WJ0SF.
  • Zverev Egor. Topic: Learning co-evolution information with natural language processing for protein folding problem. Links: LinkReview, GitHub, Paper, Slides, Video. Consultant: Sergei Grudinin, Ilya Igashov. Letters: AILB*PXBRH1CVO*TEM*WJSF. Reviewer: Alexandr Tolmachev (Review).
  • Gorchakov Vyacheslav. Topic: Importance Sampling for Chance Constrained Optimization. Links: LinkReview, GitHub, Paper, Video. Consultant: Yuri Maksimov. Letters: AILB*PX0B0R0H1C0V0O*0T0E0M*0W0JS0F. Reviewer: Bazhenov Andrey (Review).
  • Lindemann Nikita. Topic: Training with an expert for a sample with many domains. Links: LinkReview, GitHub, Paper, Slides. Consultant: Andrey Grabovoi. Letters: AILPXBRH1C0V0O*TE0M*0W0J0SF0.

Task 74

  • Name: Existence conditions for hidden feedback loops in recommender systems
  • Problem description: Recommender systems are known to inadvertently limit the user's choice because the model adapts to their preferences (the echo chamber / filter bubble effect). The effect is a special case of hidden feedback loops (see Analysis H.F.L.): by recommending the same objects of interest to the user, the algorithm maximizes the quality of its own work. The resulting problems are a) lack of variety and b) saturation / volatility of the user's interests.
  • Task: The algorithm does not know the user's interests, and the user is not always honest in their choices. Under what conditions, that is, for what properties of the learning algorithm and of the user's dishonesty (deviation of the user's choice from their interests), will the indicated effect be observed? Clarification: the recommendation algorithm offers the user a_t objects to choose from. The user selects one of them, c_t, according to a Bernoulli distribution given by the interest model mu(a_t). Based on the user's choice, the algorithm updates its internal state w_t and offers the next set of objects. On an infinite horizon, the total reward sum_t c_t must be maximized. Find the conditions for the existence of unbounded growth of user interest in the proposed objects in a recommender system with the Thompson Sampling (TS) multi-armed bandit algorithm under noisy user choice c_t. Without noise, it is known that unbounded growth (in the model) always occurs [1].
  • Data: created as part of the experiment (a simulation model) by analogy with article [1]; external data is not required.
  • References:
    1. Jiang, R., Chiappa, S., Lattimore, T., György, A. and Kohli, P., 2019, January. Degenerate feedback loops in recommender systems. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 383-390).
    2. Khritankov, A. (2021). Hidden Feedback Loops in Machine Learning Systems: A Simulation Model and Preliminary Results. In International Conference on Software Quality (pp. 54-65). Springer, Cham.
    3. Khritankov A. (2021). Hidden feedback loop experiment demo. https://github.com/prog-autom/hidden-demo
  • Basic algorithm: The initial mathematical model of the phenomenon under study is described in article [1], the experimental research method in article [2], and the base source code is available at [3].
  • Solution: It is necessary to derive conditions for the existence of positive feedback for the Thompson Sampling multi-armed bandit algorithm from the known theoretical properties of this algorithm, and then verify them in the simulation model. For verification, a series of experiments is performed, sweeping the parameter ranges and estimating the error (variance) of the simulation. The results are compared with the previously constructed mathematical model of the effect. An implementation of the experiment system exists and can be extended for this task.
  • Novelty: The studied positive feedback effect is observed in real and model systems and is described in many publications as an undesirable phenomenon. A model exists only for the limited case of no noise in the user's actions, which does not occur in practice. Under the proposed conditions, the problem has not previously been posed or solved for recommender systems; for the regression problem, a solution is known.
  • Authors: Expert, consultant - Anton Khritankov
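The setup above can be sketched in a few lines. This is a toy simulation, not the model from [1]-[3]: the interest-drift rule, the noise model and all constants are assumptions chosen only to make the echo-chamber effect visible.

```python
import numpy as np

def simulate_feedback_loop(n_items=10, horizon=2000, drift=0.01,
                           noise=0.0, seed=0):
    """Toy hidden-feedback-loop simulation with a Beta-Bernoulli
    Thompson Sampling recommender. Hypothetical model: the true
    interest mu grows by `drift` each time the recommended item is
    clicked, and with probability `noise` the user's choice
    deviates from their interest."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(0.1, 0.5, n_items)      # latent interests
    alpha = np.ones(n_items)                 # Beta posterior counts
    beta = np.ones(n_items)
    total_reward = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)        # Thompson sample per arm
        a = int(np.argmax(theta))            # recommended item
        p = mu[a]
        if rng.random() < noise:             # dishonest / noisy choice
            p = 1.0 - p
        c = rng.random() < p                 # Bernoulli click
        alpha[a] += c
        beta[a] += 1 - c
        if c:                                # echo-chamber drift
            mu[a] = min(1.0, mu[a] + drift)
        total_reward += c
    return mu, total_reward

mu_no_noise, r0 = simulate_feedback_loop(noise=0.0)
mu_noisy, r1 = simulate_feedback_loop(noise=0.3)
```

Without noise the drift and the bandit reinforce each other, driving the interest in some item towards its cap; the question in the task is for which noise levels this unbounded growth survives.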

Task 77

  • Name: Determining the phase and disorder of human movement by signals from wearable devices
  • Task: A wide class of periodic movements of a person or an animal is investigated. It is required to find the beginning and end of each movement and to understand when one type of movement ends and another begins. For this, the problem of time series segmentation is solved: the phase trajectory of a single movement is constructed and its actual dimension is found. The purpose of the work is to describe a method for finding the minimum dimension of the phase space and, by repetition of the phase, to segment the periodic actions of a person. It is also necessary to propose a method for extracting the zero phase in the given space for a specific action. Bonus: find the discord in the phase trajectory and indicate the change in the type of movement. Bonus 2: do this for different phone positions by proposing models of invariant transformations.
  • Data: The data consist of time series read from a three-axis accelerometer with an explicit periodic class (walking, running, walking up and down stairs, etc.). It is possible to collect your own data from a mobile device, or to take model data from the UCI HAR dataset.
  • References:
    1. A. P. Motrenko, V. V. Strijov. Extracting fundamental periods to segment biomedical signals // Journal of Biomedical and Health Informatics, 2015, 20(6) : 1466–1476. (Segmentation of time series with periodic actions: the segmentation problem was solved using a phase space of fixed dimension.) PDF URL
    2. A. D. Ignatov, V. V. Strijov. Human activity recognition using quasi-periodic time series collected from a single triaxial accelerometer // Multimedia Tools and Applications, 2015 : 1–14. (Classification of human activity via time series segmentation: classifiers over the obtained segments were studied.) PDF URL
    3. Grabovoy A.V., Strijov V.V. Quasi-Periodic Time Series Clustering for Human Activity Recognition // Lobachevskii J Math 41, 333–339 (2020). (Segmentation of time series into quasi-periodic segments: segmentation methods using principal component analysis and transition to the phase space were studied.) Text Slides DOI
  • Basic algorithm: The basic algorithm is described in works 1 and 3; the code is available here, and the code of work 3 is available from its author.
  • Solution: It is proposed to consider various dimensionality reduction algorithms and compare different spaces in which the phase trajectory is constructed. Develop an algorithm for finding the minimum dimension of the phase space in which the phase trajectory has no self-intersections up to the standard deviation of the reconstructed trajectory.
  • Novelty: In Motrenko's article the space dimension equals two; this shortcoming must be corrected: the phase trajectory must not intersect itself. If, in addition, one type of movement can be distinguished from another within a single period (say, the subject switched from running to walking and this is recognized within a step and a half), all the better.
  • Authors: consultants Kormakov G.V., Tikhonov D.M.; Expert Strizhov V.V.
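A minimal sketch of one possible way to estimate the minimum phase-space dimension, using the false-nearest-neighbors criterion over a time-delay embedding. The test signal, the delay, the Theiler window and the thresholds are illustrative assumptions, not part of the task statement.

```python
import numpy as np

def delay_embedding(x, dim, tau):
    """Time-delay embedding of a scalar series into R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def false_neighbor_fraction(x, dim, tau, w=50, rtol=10.0):
    """Fraction of nearest neighbors in dimension dim that separate
    strongly when the embedding is lifted to dimension dim + 1
    (a self-intersection indicator). Temporal neighbors within the
    Theiler window w are excluded."""
    emb = delay_embedding(x, dim, tau)
    emb1 = delay_embedding(x, dim + 1, tau)
    n = len(emb1)
    false = 0
    for i in range(n):
        d = np.linalg.norm(emb[:n] - emb[i], axis=1)
        d[max(0, i - w) : i + w + 1] = np.inf   # Theiler exclusion
        j = int(np.argmin(d))
        extra = abs(emb1[i, -1] - emb1[j, -1])  # separation in dim + 1
        if extra / max(d[j], 1e-12) > rtol:
            false += 1
    return false / n

# One period of a pure sine as a stand-in for one accelerometer axis.
t = np.linspace(0.0, 2 * np.pi, 1000)
x = np.sin(t)
tau = 250                                   # about a quarter period
fnn = {dim: false_neighbor_fraction(x, dim, tau) for dim in (1, 2)}
min_dim = min(dim for dim, f in fnn.items() if f < 0.1)
```

For a sine the trajectory self-intersects in one dimension but unfolds into a non-self-intersecting curve in two, so the estimated minimum dimension is 2; for real accelerometer signals the same criterion is applied after dimensionality reduction.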

Task 78

  • Name: Importance Sampling for Scenario Approximation of Chance Constrained Optimization
  • Task: Optimization problems with probabilistic constraints often arise in engineering practice, for example, minimizing the cost of energy generation in power networks with (randomly fluctuating) renewable energy sources. In this case, safety restrictions must be observed: the voltages at generators and consumers, as well as the currents on the lines, must stay below certain thresholds. However, even in the simplest situations the problem cannot be solved exactly. The best-known approach is chance constrained optimization, which often gives a good approximation. An alternative approach is to sample the network operation modes and reduce the problem to a classification problem on the resulting data set: separating bad modes from good ones with a given error of the second kind. A sufficiently accurate solution, however, requires a very large amount of data, which often makes the problem numerically inefficient. We suggest using importance sampling to reduce the number of scenarios. Importance sampling replaces sampling from the nominal distribution, which often carries no information since all bad events are very rare, with a synthetic distribution that concentrates samples in the neighborhood of bad events.
  • Problem statement: find the minimum of a convex function (price) under probabilistic constraints (the probability of exceeding a certain threshold for a system of linear/quadratic functions is small) and numerically show the effectiveness of sampling in this problem.
  • Data: Data is available in the pypower and matpower packages as csv files.
  • References: The proposed algorithms are based on 3 articles:
    1. Owen, Maximov, Chertkov. Importance Sampling for the Union of Rare Events with Applications to Power Systems LINK
    2. A. Nemirovski. On safe tractable approximations of chance constraints [1]
    3. S. Tong, A. Subramanyam, and Vi. Rao. Optimization under rare chance constraints. LINK
    4. In addition, the authors of the problem have a draft of the article, in which you need to add a numerical part.
  • Basic algorithm: A list of basic algorithms is provided in this lecture LINK
  • Solution: in numerical experiments, compare the sample sizes required by standard methods (scenario approximation) and by importance sampling to obtain solutions of comparable quality (and, for the inverse problem, compare the quality of the solutions at equal sample sizes).
  • Novelty: The problem has long been known in the community, and scenario approximation is one of the main methods. At the same time, importance sampling helps to significantly reduce the number of scenarios. We have recently obtained a number of interesting results on how to compute optimal samplers; using them, the complexity of the problem is significantly reduced.
  • Authors: Expert – Yuri Maksimov, consultant – Yuri Maksimov and Alexander Lukashevich, student.
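The gain promised by importance sampling can be illustrated on the textbook rare-event example: estimating a small Gaussian tail probability by naive Monte Carlo versus sampling from a shifted proposal with likelihood-ratio weights. The one-dimensional setting and the threshold are assumptions for illustration only.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n = 100_000
threshold = 4.0          # "bad event": a standard normal exceeds 4

# Naive Monte Carlo: almost no samples hit the rare event.
x = rng.standard_normal(n)
p_naive = np.mean(x > threshold)

# Importance sampling: draw from a proposal centred at the threshold
# and reweight each point by the likelihood ratio
# w(y) = phi(y) / phi_proposal(y).
y = rng.normal(loc=threshold, scale=1.0, size=n)
log_w = -0.5 * y**2 + 0.5 * (y - threshold) ** 2
p_is = np.mean((y > threshold) * np.exp(log_w))

# Exact tail probability for reference.
p_true = 0.5 * erfc(threshold / sqrt(2.0))
```

With the same budget the naive estimate is essentially noise (a handful of hits at best), while the importance-sampling estimate is accurate to a fraction of a percent; the same mechanism underlies scenario reduction for chance constraints.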

Task 79

  • Name: Improving Bayesian Inference in Physics Informed Machine Learning
  • Task: Machine learning methods are widely used in physics, in particular for turbulence problems and for analyzing the stability of physical networks. The key issue is which modes to choose for training the models. A frequent choice is a sequence of points that uniformly covers the admissible set, but such sequences are often not very informative, especially if analytical methods already give a region where the system is guaranteed to be stable. The problem proposes several sampling methods that allow this information to be taken into account. Our goal is to compare them and find the one that requires the smallest sample size (an empirical comparison).
  • Data: The experiment is proposed to be carried out on model and real data. The simulation experiment consists in analyzing the stability of (slightly non-linear) differential equations (synthetic data is self-generated). The second experiment is to analyze the stability of energy systems (data from matpower, pypower, GridDyn).
  • References:
    1. Art Owen. Quasi Monte Carlo Sampling. LINK 
    2. Jian Cheng & Marek J. Druzdzel. Computational Investigation of Low-Discrepancy Sequences in Simulation Algorithms for Bayesian Networks [2]
    3. A. Owen, Y Maximov, M. Chertkov. Importance Sampling for the Union of Rare Events with Applications to Power Systems [3]
    4. Polson and Solokov. Deep Learning: A Bayesian Perspective [4]
    5. In addition: the authors of the problem have a draft work on this topic
  • Basic algorithm: The basic algorithm we are improving is Quasi Monte Carlo (QMC, LINK). The task is to construct low-discrepancy sequences not covering the polyhedral region and the region given by the intersection of the quadratic constraints. Another algorithm to compare with: E. Gryazina, B. Polyak. Random Sampling: a Billiard Walk Algorithm LINK, and Hit-and-Run-type algorithms [5].

  • Solution: importance sampling methods, in particular extensions of the approaches of (Boy, Ryi, 2014) and (Owen, Maximov, Chertkov, 2017), and their applications to ML/DL for physical problems.
  • Novelty: a significant reduction in sample complexity and the explicit use of existing analytical results when learning to solve physical problems; previously, ML approaches and analytical solutions mostly developed in parallel.
  • Authors: Expert Yuri Maksimov, consultant Yuri Maksimov and Alexander Lukashevich, student.
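A minimal illustration of the low-discrepancy sequences discussed above: a hand-rolled 2-D Halton sequence compared with uniform random points by a crude anchored-box discrepancy proxy. The proxy, the bases and the sample size are illustrative assumptions.

```python
import numpy as np

def van_der_corput(n, base):
    """First n terms of the van der Corput low-discrepancy sequence."""
    seq = np.empty(n)
    for i in range(n):
        q, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            k, r = divmod(k, base)
            denom *= base
            q += r / denom          # next digit, reflected about the point
        seq[i] = q
    return seq

def halton(n, bases=(2, 3)):
    """2-D Halton sequence: one van der Corput stream per axis."""
    return np.column_stack([van_der_corput(n, b) for b in bases])

def box_error(points, grid=10):
    """Crude discrepancy proxy: worst absolute error of the empirical
    measure over a grid of anchored boxes [0,a] x [0,b]."""
    errs = []
    for a in np.linspace(0.1, 1.0, grid):
        for b in np.linspace(0.1, 1.0, grid):
            inside = np.mean((points[:, 0] <= a) & (points[:, 1] <= b))
            errs.append(abs(inside - a * b))
    return max(errs)

rng = np.random.default_rng(0)
n = 1024
err_qmc = box_error(halton(n))
err_mc = box_error(rng.random((n, 2)))
```

At equal sample sizes the Halton points approximate box volumes markedly better than i.i.d. uniform points, which is exactly the property exploited when covering admissible regions with fewer training modes.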

 

Task 81

  • Name: NAS — Generation and selection of neural network architectures
  • Task: The task of choosing the optimal neural network architecture is posed as a problem of sampling the vector of structural parameters. The optimality criterion is defined in terms of the accuracy, complexity and stability of the model. The sampling procedure consists of two steps: generating a new structure and rejecting it if it does not satisfy the optimality criterion. It is proposed to explore various sampling methods. The formulation of the problem of choosing the optimal structure is described in Potanin-1.
  • Data: Two separate sets are offered. The first consists of a single element, the popular MNIST dataset: a strong and generally accepted baseline, used as a benchmark in the WANN article, and quite large (multi-class classification). The second is a collection of datasets for the regression task, varying in size from very small to quite large. Here is a link to the datasets and a notebook for downloading the data.
  • References:
    1. Potanin - 1
    2. Potanin - 2. One more work, the text is given to the interested student, but without publication.
    3. Strizhov Factory laboratory Error function
    4. Informatica
    5. WANN
    6. DARTS
    7. Symbols
    8. NEAT
  • Basic algorithm: The closest project, and its code. Up-to-date code is available from the consultant.
  • Solution: A number of experiments have already been performed in which sampling is done by a genetic algorithm, and acceptable results have been obtained. It is proposed to analyze and improve them: namely, to separate two modules, generation and rejection, and to compare several types of sampling. Basic: importance sampling; desirable: Metropolis-Hastings (or even Metropolis-adjusted Langevin) sampling. Since we regard the genetic algorithm as a process with jumps, it is proposed to take this into account when designing the sampling procedure. A bonus of MH is that it has a Bayesian interpretation. The first level of Bayesian inference as applied to MH is described in [Informatica]. It is required either to rewrite it in terms of the distribution of structural parameters, or to describe both levels in general, moving the structural parameters to the second level (approximately the same will appear in Aduenko's problem).
  • Novelty: Neural networks excel at computer vision, reinforcement learning, and natural language processing tasks. One of their main goals is to perform well on tasks currently solved exclusively by humans, that is, by natural neural networks. Artificial neural networks still work very differently from natural ones: natural neural networks evolve over time, changing both the strength of connections and their architecture, whereas artificial neural networks can adjust the strength of connections through weights but cannot change their architecture. Therefore, choosing optimal neural network structures for specific tasks is an important step in developing the capabilities of neural network models.
  • Authors: consultant Mark Potanin, Expert Strizhov V.V.
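The generate-and-reject scheme with Metropolis-Hastings sampling can be sketched on a toy "structure" space: binary feature masks of a linear model with an accuracy-plus-complexity energy. The energy, the penalty weight, the temperature and the synthetic data are hypothetical stand-ins for the real structural-parameter setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first 3 of 10 features are relevant.
n, d = 200, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(n)

def energy(mask, lam=0.05):
    """Optimality criterion: fit error plus a complexity penalty
    proportional to the number of active structural components."""
    if mask.sum() == 0:
        return np.inf
    Xa = X[:, mask]
    w, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return np.mean((y - Xa @ w) ** 2) + lam * mask.sum()

# Metropolis-Hastings over binary structure vectors:
# propose flipping one component, accept by the energy difference.
mask = rng.random(d) < 0.5
e = energy(mask)
best_mask, best_e = mask.copy(), e
for _ in range(2000):
    prop = mask.copy()
    j = rng.integers(d)
    prop[j] = ~prop[j]
    e_prop = energy(prop)
    if np.log(rng.random()) < (e - e_prop) / 0.1:   # temperature 0.1
        mask, e = prop, e_prop
        if e < best_e:
            best_mask, best_e = mask.copy(), e
```

The acceptance rule makes downhill moves always accepted and uphill moves accepted with Boltzmann probability, which is the "rejection" module of the two-step sampling procedure; for architectures, the bit flip would be replaced by a structural mutation.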

Task 82

  • Name: Training with an Expert for a sample with many domains.
  • Task: The problem of approximating a multi-domain sample by a single multi-model, a mixture of experts, is considered. The data are assumed to be a sample containing several domains, with no domain label for each object. Each domain is approximated by a local model. The paper considers a two-stage optimization problem based on the EM algorithm.
  • Data: Samples of reviews from the Amazon site for different types of goods are used as data. A linear model is used as the local model, and tf-idf vectors within each domain serve as the feature description of the reviews.
  • References:
    1. https://arxiv.org/pdf/1806.00258.pdf
    2. http://www.mysmu.edu/faculty/jingjiang/papers/da_survey.pdf
    3. https://dl.acm.org/doi/pdf/10.1145/3400066
  • Basic algorithm and Solution: The basic solution is presented here. The work uses the mixture-of-experts method for the multi-source domain adaptation problem. The code for the article is available at the link.
  • Novelty: At the moment, in machine learning there are more and more tasks related to data that are taken from different sources. In this case, there are samples that consist of a large number of domains. At the moment, there is no complete theoretical justification for constructing mixtures of local models for approximating such types of samples.
  • Authors: Grabovoi A.V., Strizhov V.V.
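A minimal sketch of the two-stage EM optimization for a mixture of linear experts on synthetic two-domain data with hidden domain labels. The data-generating lines, the noise level and the fixed variance are illustrative assumptions (the real setting uses tf-idf features of Amazon reviews).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "domains": y = 2x on one, y = -x + 3 on the other;
# the domain label z is hidden, as in the multi-domain setting.
n = 300
x = rng.uniform(-3, 3, n)
z = rng.random(n) < 0.5
y = np.where(z, 2 * x, -x + 3) + 0.1 * rng.standard_normal(n)
X = np.column_stack([x, np.ones(n)])        # add an intercept

K, sigma2 = 2, 0.1 ** 2                     # experts, fixed noise variance
W = rng.standard_normal((K, 2))             # expert weights
pi = np.full(K, 1.0 / K)                    # mixture weights
for _ in range(50):
    # E-step: responsibilities of each expert for each object.
    resid = y[None, :] - W @ X.T            # shape (K, n)
    logp = np.log(pi)[:, None] - 0.5 * resid ** 2 / sigma2
    logp -= logp.max(axis=0, keepdims=True)
    R = np.exp(logp)
    R /= R.sum(axis=0, keepdims=True)
    # M-step: weighted least squares per expert.
    for k in range(K):
        sw = np.sqrt(R[k])
        W[k], *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    pi = np.clip(R.mean(axis=1), 1e-6, None)
```

After convergence each expert approximates one hidden domain; the responsibilities R play the role of the missing domain labels.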

Task 17

  • Name: BCI: Selection of consistent models for building a neural interface
  • Task: When building brain-computer interface systems, simple, stable models are used. An important step in building such an interface is an adequate choice of model. A wide range of models is considered: linear models, simple neural networks, recurrent networks, transformers. The peculiarity of the problem is that the prediction requires modeling not only the source signal taken from the cerebral cortex, but also the target signal taken from the limbs. Thus, two models are required; for them to work together, an agreement space is constructed. It is proposed to explore the properties of this space and the properties of the resulting forecast (the neural interface) on various pairs of models.
  • Data: ECoG/EEG brain signal data sets.
    1. Need ECoG (dataset 25 contains EEG, EOG and hand movements) http://bnci-horizon-2020.eu/database/data-sets
    2. neyrotycho — our old data.
  • References:
    1. Yaushev F.Yu., Isachenko R.V., Strizhov V.V. Latent space matching models in the forecasting problem // Systems and Means of Informatics, 2021, 31(1). PDF
    2. Isachenko R.V. Choice of a signal decoding model in high-dimensional spaces. Manuscript, 2021. PDF
    3. Isachenko R.V. Choice of a signal decoding model in high-dimensional spaces. Slides, 2020. [6]
    4. Isachenko R.V., Vladimirova M.R., Strijov V.V. Dimensionality reduction for time series decoding and forecasting problems // DEStech Transactions on Computer Science and Engineering, 2018, 27349 : 286-296. PDF
    5. Isachenko R.V., Strijov V.V. Quadratic Programming Optimization with Feature Selection for Non-linear Models // Lobachevskii Journal of Mathematics, 2018, 39(9) : 1179-1187. PDF
    6. Motrenko A.P., Strijov V.V. Multi-way feature selection for ECoG-based brain-computer interface // Expert Systems with Applications, 2018, 114(30) : 402-413. PDF
    7. Eliseyev A., Aksenova T. Stable and artifact-resistant decoding of 3D hand trajectories from ECoG signals using the generalized additive model //Journal of neural engineering. – 2014.
  • Basic algorithm: Described in the first work; the code is available. In that work the data are two parts of an image; in ours, the brain signal and the hand movements. The overarching task is to finish the first work. Code and related works are also available here.
  • Solution: The case is considered when the initial data are heterogeneous: the spaces of the independent and target variables are of different nature. It is required to build a predictive model that takes into account the dependencies both in the source space of the independent variable and in the space of the target variable. It is proposed to investigate the accuracy, complexity and stability of pairs of various models. Since an inverse problem is solved when building the forecast, inverse transformations must be built for each model. For this, one can use both basic techniques (PLS) and normalizing flows.
  • Novelty: Analysis of the prediction and latent space obtained by a pair of heterogeneous models.
  • Authors: Consultant Roman Isachenko, Expert Strizhov V.V.
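The agreement-space idea can be illustrated with the simplest PLS building block: the first pair of projection directions is given by the leading singular vectors of the cross-covariance matrix between the two signal spaces. The synthetic "brain"/"hand" data below are an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous pair: X is a "brain" signal, Y a "hand" signal,
# driven by one shared latent component (an illustrative assumption).
n = 300
latent = rng.standard_normal(n)
a = rng.standard_normal(10)                 # loadings of X on the latent
b = rng.standard_normal(5)                  # loadings of Y on the latent
X = np.outer(latent, a) + 0.5 * rng.standard_normal((n, 10))
Y = np.outer(latent, b) + 0.5 * rng.standard_normal((n, 5))

# First PLS direction pair: the leading singular vectors of the
# cross-covariance matrix X'Y give matched projections whose
# covariance is maximal, a one-step "agreement space".
Xc, Yc = X - X.mean(0), Y - Y.mean(0)
U, s, Vt = np.linalg.svd(Xc.T @ Yc)
u, v = U[:, 0], Vt[0]
tx, ty = Xc @ u, Yc @ v                     # matched latent scores
corr = abs(np.corrcoef(tx, ty)[0, 1])
```

The projected scores of the two heterogeneous spaces are strongly correlated, which is what makes the inverse transformations from the agreement space back to each signal space feasible.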

Task 69

  • Name: Graph Neural Network in Reaction Yield prediction
  • Task: There are disconnected graphs of the source molecules and products of a chemical reaction, and the yield of the main product is known. It is required to design an algorithm that predicts the yield by solving a regression task on the given disconnected graphs.
  • Data: Database of reaction from US patents [7]
  • References:
    • [8] A general overview.
    • [9] Relational Graph Convolution Neural Network
    • [10] Transformer architecture
    • [11] Graph neural network learning for chemical compounds synthesis
  • Basic algorithm: Transformer model. The input sequence is a SMILES representation of the source and product molecules.
  • Solution: A pipeline for working with disconnected graphs is proposed. It includes the construction of an extended graph with molecule and reaction representations, a Relational Graph Convolutional Neural Network, and a Transformer encoder. The method is applied to yield prediction.
  • Novelty: A solution for the regression problem on the given disconnected graph is constructed; the approach demonstrates better performance compared with other solutions.
  • Authors: Nikitin Filipp, Isayev Olexandr, Strizhov V.V.

Task 84

  • Name: Trajectory Regularization of Deep Learning Model Parameters Optimization Based on Knowledge Distillation
  • Task: The problem of optimizing the parameters of a deep learning model is considered, in the case when the responses of a more complex model (the teacher) are available during optimization. The classical approach is to learn from the responses of the complex model (knowledge distillation), with hyperparameters assigned empirically from the model's performance on a held-out sample. We propose a modification of knowledge distillation in which the significance coefficient of the distilling term, as well as its gradients, act as hyperparameters. Both groups of parameters allow the optimization of the model parameters to be adjusted. To optimize the hyperparameters, we consider a bilevel optimization problem: at the first level, the model parameters are optimized; at the second, the hyperparameters are approximately optimized against the loss function on the held-out sample.
  • Data: Sampling of CIFAR-10 images
  • References:
    1. Distillation of knowledge
    2. Hyperparameter Optimization in a Bilevel Problem: Greedy Method
    3. Hyperparameter Optimization in a Bilevel Problem: Comparison of Approaches
    4. Meta Optimization: neural network instead of optimization operator
  • Basic algorithm: Model optimization without distillation and with standard distillation approach
  • Solution: A bilevel problem is used for model optimization; the combination of gradients of both terms is produced by a separate model (an LSTM).
  • Novelty: A new approach to model distillation is proposed that should significantly improve the performance of models trained in the privileged-information setting. It is also planned to study the dynamics of the hyperparameters during optimization.
  • Authors: Oleg Bakhteev, Strizhov V.V.
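The distilling term whose significance coefficient acts as a hyperparameter can be written down explicitly. Below is a standard distillation objective (cross-entropy plus temperature-scaled KL to the teacher); the coefficient `lam` and temperature `T` are the hyperparameters the bilevel scheme would tune on the held-out sample. All shapes and constants are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      lam=0.5, T=2.0):
    """Convex combination of the cross-entropy with the true labels
    and the KL divergence to the teacher's temperature-softened
    outputs; `lam` is the significance coefficient of the
    distilling term."""
    p_student = softmax(student_logits)
    ce = -np.mean(np.log(
        p_student[np.arange(len(labels)), labels] + 1e-12))
    ps_T = softmax(student_logits, T)
    pt_T = softmax(teacher_logits, T)
    kl = np.mean(np.sum(
        pt_T * (np.log(pt_T + 1e-12) - np.log(ps_T + 1e-12)), axis=1))
    return (1 - lam) * ce + lam * T ** 2 * kl   # T^2 rescales gradients

rng = np.random.default_rng(0)
s = rng.standard_normal((4, 3))     # student logits (batch of 4)
t = rng.standard_normal((4, 3))     # teacher logits
labels = np.array([0, 1, 2, 0])
loss = distillation_loss(s, t, labels)
```

In the bilevel formulation the inner level minimizes this loss over the student parameters while the outer level adjusts `lam` (and, in the proposed extension, the gradient combination itself) by the validation loss.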

Task 85

  • Name: A differentiable search algorithm for model architecture with control over its complexity
  • Task: The problem of choosing the structure of a deep learning model with a predetermined complexity is considered. It is required to propose a method for searching for a model that allows controlling its complexity with low computational costs.
  • Data: MNIST, CIFAR
  • References:
    1. Grebenkova O.S., Oleg Bakhteev, Strizhov V.V. Variational optimization of a deep learning model with complexity control // Informatics and its applications, 2021, 15(2). PDF
    2. DARTS
    3. hypernets
  • Basic algorithm: DARTS
  • Solution: The proposed method is to use a differentiable neural network architecture search algorithm (DARTS) with parameter complexity control using a hypernet.
  • Novelty: The proposed method allows controlling the complexity of the model during the architecture search without additional heuristics.
  • Authors: Oleg Bakhteev, Grebenkova O. S.
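The core of DARTS that the method builds on is the continuous relaxation of the choice of operation: each edge computes a softmax-weighted mixture of candidate operations, and the final architecture is read off by argmax after the search. The scalar toy operations below are an assumption for illustration; the hypernetwork-based complexity control is not shown.

```python
import numpy as np

# Candidate operations on an edge of the cell (toy, element-wise ops).
ops = [
    lambda x: np.zeros_like(x),     # "none"
    lambda x: x,                    # identity / skip connection
    lambda x: np.maximum(x, 0.0),   # ReLU
    lambda x: np.tanh(x),
]

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def mixed_op(x, alpha):
    """DARTS continuous relaxation: the edge output is the
    softmax-weighted sum of all candidate operations, so the
    architecture parameters alpha are differentiable."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

def discretize(alpha):
    """After the search, keep the operation with the largest weight."""
    return int(np.argmax(alpha))

x = np.array([-1.0, 0.5, 2.0])
alpha = np.array([0.1, 2.0, 0.3, 0.2])   # architecture parameters
out = mixed_op(x, alpha)
chosen = discretize(alpha)               # here: the identity op
```

Because `mixed_op` is differentiable in `alpha`, architecture and weights can be optimized jointly by gradient descent; complexity control then amounts to shaping the distribution over `alpha`, e.g. by a hypernet as in reference 1.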

Task 86

  • Name: Learning co-evolution information with natural language processing for protein folding problem
  • Task: One of the most essential problems in structural bioinformatics is protein fold recognition since the relationship between the protein amino acid sequence and its tertiary structure is revealed by protein folding. A specific protein fold describes the distinctive arrangement of secondary structure elements in the nearly-infinite conformation space, which denotes the structural characteristics of a protein molecule.
  • Problem description: request
  • Authors: Sergei Grudinin, Maria Kadukova.

Task 87

  • Name: Bayesian choice of structures of generalized linear models
  • Task: The work is devoted to testing feature selection methods. The sample under study is assumed to contain a significant number of multicollinear features. Multicollinearity is a strong correlation between the features selected for analysis that jointly affect the target vector; it makes it difficult to estimate regression parameters and to identify the relationship between the features and the target vector. There is a set of time series containing the readings of various sensors that reflect the state of a device. The sensor readings correlate with one another. It is necessary to choose the optimal set of features for solving the forecasting problem.
  • Novelty: One of the most preferred feature selection algorithms has been published; it uses structural parameters but lacks theoretical justification. It is proposed to build a theory by describing and analyzing various prior distributions of the structural parameters. Works on neural architecture search likewise lack a clear theory and a list of prior assumptions.
  • Data: Multivariate time series with readings from various sensors from paper 4, for starters, all samples from paper 1.
  • References: Keywords: bootstrap aggregation, Belsley method, vector autoregression.
    1. Katrutsa A.M., Strijov V.V. Comprehensive study of feature selection methods to solve multicollinearity problem according to evaluation criteria // Expert Systems with Applications, 2017, 76 : 1-11. PDF
    2. Katrutsa A.M., Strijov V.V. Stresstest procedure for feature selection algorithms // Chemometrics and Intelligent Laboratory Systems, 2015, 142 : 172-183.  PDF
    3. Strizhov V.V. Error function in regression recovery problems // Factory Laboratory. Material Diagnostics, 2013, 79(5) : 65-73. PDF
    4. Zaitsev A.A., Strizhov V.V., Tokmakova A.A. Estimation of hyperparameters of regression models by the maximum likelihood method // Information Technologies, 2013, 2 : 11-15. PDF
    5. Kuznetsov M.P., Tokmakova A.A., Strijov V.V. Analytic and stochastic methods of structure parameter estimation // Informatica, 2016, 27(3) : 607-624. PDF
    6. Katrutsa A.M., Strizhov V.V. The problem of multicollinearity in the selection of features in regression problems // Information Technologies, 2015, 1 : 8-18. PDF
    7. Neychev R.G., Katrutsa A.M., Strizhov V.V. Selection of the optimal set of features from a multicorrelated set in the forecasting problem // Factory Laboratory. Material Diagnostics, 2016, 82(3) : 68-74. PDF
  • Basic algorithm: Described in Reference 1: Quadratic Programming for QPFS Feature Selection. Code from Roman Isachenko.
  • Solution: It is proposed to consider the structural parameters used in QPFS at the second level of Bayesian inference: introduce informative prior distributions of the parameters and structural parameters, and compare different prior assumptions.
  • Novelty: Statistical Analysis of Structural Parameter Space and Visualization
  • Authors: Alexander Aduenko — consultant, Strizhov V.V.
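A sketch of the QPFS-style quadratic program behind the basic algorithm: minimize a redundancy term x'Qx minus a relevance term b'x over the probability simplex. Here Q and b are built from absolute correlations and the program is solved by projected gradient descent; the balance coefficient and the synthetic multicollinear data are assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0)

def qpfs(X, y, alpha=0.3, iters=2000, lr=0.01):
    """QPFS-style feature scoring: minimize
    alpha * x'Qx - (1 - alpha) * b'x over the simplex, where Q is
    pairwise feature similarity and b is feature relevance."""
    Q = np.abs(np.corrcoef(X, rowvar=False))
    b = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    x = np.full(X.shape[1], 1.0 / X.shape[1])
    for _ in range(iters):
        grad = 2 * alpha * Q @ x - (1 - alpha) * b
        x = project_simplex(x - lr * grad)
    return x

rng = np.random.default_rng(0)
n = 500
f1 = rng.standard_normal(n)
f2 = f1 + 0.01 * rng.standard_normal(n)   # near-duplicate of f1
f3 = rng.standard_normal(n)               # irrelevant feature
y = f1 + 0.1 * rng.standard_normal(n)
X = np.column_stack([f1, f2, f3])
scores = qpfs(X, y)
```

The redundancy term splits the weight between the two multicollinear relevant features and drives the irrelevant one to zero; the Bayesian treatment proposed above would place priors on exactly these structural parameters.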

Task 88

  • Name: Search for the boundaries of the iris by the method of circular projections
  • Task: Given a monochrome bitmap image of the eye (see examples) and the approximate position of the pupil center. "Approximate" means that the computed pupil center deviates from the true one by no more than half of the true radius. It is necessary to determine the approximate positions of the circles approximating the pupil and the iris. The algorithm must be very fast.
  • Data: About 200 thousand eye images. For each, the positions of the true circles are marked, for training and testing the method being created.
  • Basic algorithm: To speed up work with the image, it is proposed to aggregate the data using circular projections of brightness. A circular projection is a function of the radius whose value P(r) equals the integral of the directed image brightness gradient over a circle of radius r (or along an arc of the circle); see the example for one arc (the right quadrant) and for four arcs. Having built several circular projections, one can try to determine from them the positions of the inner and outer boundaries of the iris (a ring) using heuristics and/or a neural network. It is interesting to evaluate the capabilities of a neural network in this task.
  • References: Matveev I.A. Detection of Iris in Image By Interrelated Maxima of Brightness Gradient Projections // Applied and Computational Mathematics. 2010. V.9. N.2. P.252-257 PDF
  • Author: Matveev I.A.
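The circular projection can be prototyped in a few lines. Below is a minimal sketch (not the paper's implementation): the radial gradient is approximated by the brightness difference between adjacent circles, evaluated on a synthetic eye image; all names and parameter values are illustrative.

```python
import numpy as np

def circular_projection(img, cx, cy, r_max):
    """P(r): mean outward brightness gradient over the circle of radius r.

    The radial gradient is approximated by I(r+1) - I(r) sampled along
    the circle (a sketch, not the paper's implementation).
    """
    h, w = img.shape
    proj = np.zeros(r_max)
    angles = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    cos_a, sin_a = np.cos(angles), np.sin(angles)
    for r in range(1, r_max - 1):
        x0 = (cx + r * cos_a).astype(int)
        y0 = (cy + r * sin_a).astype(int)
        x1 = (cx + (r + 1) * cos_a).astype(int)
        y1 = (cy + (r + 1) * sin_a).astype(int)
        ok = (x1 >= 0) & (x1 < w) & (y1 >= 0) & (y1 < h) & \
             (x0 >= 0) & (x0 < w) & (y0 >= 0) & (y0 < h)
        proj[r] = np.mean(img[y1[ok], x1[ok]] - img[y0[ok], x0[ok]])
    return proj

# Synthetic eye: dark pupil (r < 20) inside a grey iris (r < 45) on sclera.
yy, xx = np.mgrid[:128, :128]
rr = np.hypot(xx - 64, yy - 64)
eye = np.where(rr < 20, 30.0, np.where(rr < 45, 120.0, 220.0))

p = circular_projection(eye, 64, 64, 60)
pupil_r = int(np.argmax(p[:35]))   # pupil border: maximal outward gradient
```

On real images the maxima of P(r) for the inner and outer iris borders would be picked jointly, as in the referenced paper.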

Task 53

  • Name: Solution of an optimization problem combining classification and regression to estimate the binding energy of a protein and small molecules.
  • Task: The goal of the problem is to solve an optimization problem with classification and regression loss functions applied to biological data.
  • Data: Approximately 12,000 complexes of proteins with small molecules. For classification, for each of them there is 1 correct position in space and 18 incorrect ones generated, for regression, each complex corresponds to the value of the binding constant (proportional to energy). The main descriptors are histograms of distributions of distances between different atoms.
  • References:
  • Basic algorithm: In the classification task, an algorithm similar to linear SVM was used; its relationship to the energy estimate, which lies outside the scope of classification, is described in https://hal.inria.fr/hal-01591154/. For MSE there is already a formulated dual problem with a regression loss function, whose implementation is a convenient starting point.
  • Solution: The first step is to solve the problem with MSE as the loss function using any convenient solver. The main difficulty may be the high dimensionality of the data, but the data are sparse. Later the formulation of the problem can be changed.
  • Novelty: Many models used to predict protein-ligand interactions are overfitted to a particular task. For example, models that predict binding energies well may be poor at selecting a protein-binding molecule from a set of non-binding ones, and models that determine the correct geometry of the complex well may be poor at predicting energies. In this problem we propose a new approach to combat such overfitting, since combining classification and regression loss functions appears to be a very natural regularization.
  • Authors: Sergei Grudinin, Maria Kadukova.

Task 75

  • Name: Alignment of image elements using metric models.
  • Task: A character set is specified. Each symbol is represented by one file, an image; the pixel size of the images may vary. All images are known to belong to the same class, such as faces, letters, flowers, or cars. (A harder variant: one class under study plus noise classes.) It is known that each image can be mapped onto another, up to noise, by an aligning transformation, or onto some average image (which may or may not be present in the sample). In the basic case this aligning transformation is specified by a neural network; in the proposed case, by a parametric transformation from some given class (the first is a special case of the second). The aligned image is compared with the original one using a distance function. If the distance between two images is statistically insignificant, the images are concluded to belong to the same class. It is required to 1) propose an adequate model of the aligning transformation that takes into account assumptions about the nature of the images (for example, only rotation and proportional scaling), 2) propose a distance function, 3) propose a method for finding the average image.
  • Data: Synthetic and real 1) pictures - faces and symbols with rotation and stretch transformation, 2) faces and cars with 3D rotation transformation with 2D projection. Synthetic images are proposed to be created manually using 1) photographs of a sheet of paper, 2) photographs of the surface of the drawing on a balloon.
  • References:
    1. support work - alignment of images using 2D DTW,
    2. support work - alignment of images using neural networks,
    3. DTW alignment work in 2D,
    4. parametric alignment work.
  • Basic algorithm: from work 1.
  • Solution: In the attached file pdf.
  • Novelty: Instead of multidimensional image alignment, parametric alignment is proposed.
  • Authors: Alexey Goncharov, Strizhov V.V.
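A minimal sketch of a parametric aligning transformation for the rotation-and-scale case: a nearest-neighbour warp and a grid search minimizing the L2 distance. The functions and parameter values are illustrative assumptions, not the proposed method.

```python
import numpy as np

def transform(img, angle, scale):
    """Rotate by `angle` (radians) and scale about the image centre,
    nearest-neighbour sampling; a minimal stand-in for a parametric
    aligning transformation."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    yy, xx = np.mgrid[:h, :w]
    # Inverse map: where does each output pixel come from?
    c, s = np.cos(-angle), np.sin(-angle)
    xs = (c * (xx - cx) - s * (yy - cy)) / scale + cx
    ys = (s * (xx - cx) + c * (yy - cy)) / scale + cy
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    out = np.zeros_like(img)
    ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out[ok] = img[yi[ok], xi[ok]]
    return out

def align(a, b, angles, scales):
    """Grid-search the rotation+scale that best maps b onto a (L2 distance)."""
    best = min(((np.sum((a - transform(b, t, s)) ** 2), t, s)
                for t in angles for s in scales), key=lambda z: z[0])
    return best[1], best[2]

# Toy example: b is a rotated copy of a rectangle; recover the angle.
a = np.zeros((32, 32)); a[12:20, 8:24] = 1.0
b = transform(a, 0.3, 1.0)
angle, scale = align(a, b, np.linspace(-0.5, 0.5, 21), [1.0])
```

In the proposed setting the grid search would be replaced by gradient-based optimization of the transformation parameters, and the L2 distance by the distance function to be designed.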

Task 80

  • Name: Detection of correlations between activity in social networks and capitalization of companies
  • Task: At present, stock quotes, company capitalization, and the success or failure of an IPO depend significantly on social factors such as public opinion expressed on social media. A recent notable example is the change in GameStop quotes caused by the surge in activity on Reddit. The first-stage task is to identify relationships between the share prices of companies in different segments and activity in social networks: that is, to identify correlations between significant changes in a company's capitalization and preceding bursts (positive or negative) of its discussion in social networks. In other words, it is necessary to minimize a loss function when recovering this dependence in various classes of models (parametric models, neural networks, etc.). This task is part of a large project on the analysis of markets and the impact of social factors on risks (within a team of 5-7 professors), which is expected to lead to a series of publications sufficient to defend a dissertation.
  • Data: The task has a significant engineering component: the data are downloads of quotes from the Moscow Exchange, as well as NYT and Reddit data (crawling and parsing are done with standard tools). The student working on this task must have strong engineering skills and a desire to engage both in machine learning practice and in the engineering parts of the task.
  • References:
    1. Paul S. Adler and Seok-Woo Kwon. Social Capital: Prospects for a new Concept. [12]   
    2. Kim and Hastak. Social network analysis: Characteristics of online social networks after a disaster LINK
    3. Baumgartner, Jason, et al. "The pushshift reddit dataset." Proceedings of the International AAAI Conference on Web and Social Media. Vol. 14. 2020. [13]
  • Basic algorithm: The basic algorithms are LSTM and Graph neural networks.
  • Solution: Let's start by using LSTM, then try some of its standard extensions
  • Novelty: In this area, there are a lot of economic, model solutions, but the accuracy of these solutions is not always high. The use of modern ML/DL models is expected to significantly improve the quality of the solution.
  • Authors: Expert Yuri Maksimov, consultant Yuri Maksimov, student.
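The first-stage correlation analysis can be illustrated with a toy lagged cross-correlation between a synthetic mention-count series and synthetic returns; the data and the `xcorr` helper are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily mention counts and capitalisation changes with a 2-day lag.
mentions = rng.poisson(5, 300).astype(float)
returns = np.zeros(300)
returns[2:] = 0.8 * (mentions[:-2] - mentions.mean())
returns += rng.normal(0, 0.5, 300)

def xcorr(x, y, max_lag):
    """Pearson correlation of y against x shifted forward by each lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return {lag: float(np.mean(x[:-lag] * y[lag:])) for lag in range(1, max_lag + 1)}

corr = xcorr(mentions, returns, max_lag=10)
best_lag = max(corr, key=corr.get)   # the planted 2-day lag should dominate
```

The LSTM baseline would then be trained to exploit exactly such lead-lag structure instead of a fixed linear lag.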

Task 88b

  • Name: Finding a Pupil in an Eye Image Using the Luminance Projection Method
  • Task: Given a monochrome bitmap image of the eye, see the examples. It is necessary to determine the approximate coordinates of the pupil center. "Approximate" means that the computed pupil center must lie inside a circle centered at the true pupil center with half the true radius. The algorithm must be very fast.
  • Data: About 200 thousand eye images. For each, the position of the true circle is marked - for the purpose of training and testing the method being created.

  • Basic algorithm: To speed up work with the image, it is proposed to aggregate the data using brightness projections. The image brightness is a function of two discrete arguments I(x, y). Its projection onto the horizontal axis is P(x)=\sum \limits_y I(x,y). Projections onto inclined axes are constructed similarly. Having built several projections (two, four), one can try to determine the position of the pupil (a compact dark area) from them using heuristics and/or a neural network. It is interesting to evaluate the capabilities of a neural network in this task.

  • References: Zhi-Hua Zhou, Xin Geng. Projection functions for eye detection // Pattern Recognition. 2004. V.37. N.5. P.1049-1056. PDF
  • Author: Matveev I.A.

Task 88c

  • Name: Searching for the eyelid in an image as a parabolic contour using the projection method.
  • Task: Given a monochrome bitmap image of the eye, see the examples. It is necessary to find the contour of the upper eyelid as a parabola, that is, to determine its parameters.
  • Data: About 200 thousand eye images. For some (about 2500), a human expert marked the position of a parabola that approximates the eyelid.
  • Basic algorithm: The first step is pre-processing the image with a vertical gradient filter with subsequent binarization; below is a typical result. There are various options for the next step. For example, if the coordinates of the pupil are known, one can set a region of interest (above the pupil) and construct a parabola from the selected points in it by least-squares approximation; an example result is given below. More subtle methods are possible, such as finding the parabola using the Hough transform (see Wikipedia). Another way is to use projective methods (the Radon transform). The main idea: after fixing the coefficient a, apply the coordinate transformation y' = y - ax^2, under which all parabolas of the form y = ax^2 + bx + c turn into lines of the form y' = bx + c; then, given the coefficient b, apply the coordinate transformation y'' = y' - bx, after which the oblique lines become horizontal and are easy to detect, for example, by a horizontal projection (summing the values in the rows of the matrix of the resulting image). If the coefficients are guessed correctly, the parabola representing the eyelid gives a clear maximum in the projection. By enumerating the coefficients over a physically meaningful range, one can find the values that give the maximum projection value and take the corresponding parabola as the desired eyelid.
  • References: Wikipedia, articles "Hough Transform", "Radon Transform".
  • Author: Matveev I.A.
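The least-squares variant of the first step can be sketched directly: fit y = a*x^2 + b*x + c to candidate edge points by ordinary least squares. Synthetic points stand in for the binarized gradient image; the parabola coefficients below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-50, 50, 80)                   # pixel columns of edge points
y_true = 0.01 * x ** 2 - 0.2 * x + 40          # hypothetical "eyelid" parabola
y = y_true + rng.normal(0, 0.5, x.size)        # noisy detected edge points

# Ordinary least squares for y = a*x^2 + b*x + c.
A = np.vstack([x ** 2, x, np.ones_like(x)]).T
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
```

On real data the edge points come from the binarized vertical-gradient image, and robust fitting (or the Hough/Radon variants above) handles outliers from eyelashes and the iris border.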

Task 62

  • Name: Construction of a method for dynamic alignment of multidimensional time series, resistant to local signal fluctuations.
  • Task: In the process of working with multidimensional time series, the situation of close proximity of sensors corresponding to different measurement channels is common. As a result, small signal shifts in space can lead to signal peak fixation by neighboring sensors, which leads to significant differences in measurements in terms of L2 distance.
    Thus, small spatial shifts of the signal lead to significant fluctuations in the sensor readings. The problem considered is to construct a distance function between points of the time series that is robust to noise generated by small spatial shifts of the signal. The problem should be considered under the assumption that a map of the sensor locations is available.
  • Data:
  • References:
  • Basic algorithm: L2 distance between a pair of measurements.
  • Solution: Use the DTW distance function between two multidimensional time series. The two time axes are aligned, while inside the DTW functional the distance between the i-th and j-th measurements is chosen to be robust to local "shifts" of the signal. It is required to propose such a distance function. The basic choice is L2; the improved one is DTW between the i-th and j-th measurements (DTW inside DTW).
    You can suggest some modification, for example, the distance between the hidden layers of the autoencoder for points i and j.
  • Novelty: A method for aligning multidimensional time series is proposed that takes into account small signal fluctuations in space.
  • Authors: Expert - Strizhov V.V., consultants - Gleb Morgachev, Alexey Goncharov.
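A sketch of the "DTW inside DTW" idea: classic DTW over time, where the pointwise distance between measurement vectors is itself a DTW over the channel axis, so that a peak that jumps to a neighbouring sensor is penalized only slightly. The toy data and helper names are illustrative assumptions.

```python
import numpy as np

def dtw(a, b, dist):
    """Classic O(nm) dynamic time warping with a pluggable point distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(a[i - 1], b[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

l2 = lambda u, v: float(np.sum((u - v) ** 2))
# Inner DTW over the channel axis: a peak moving to a neighbouring sensor
# can be matched by warping channels, so it costs little.
spatial = lambda u, v: dtw(u.reshape(-1, 1), v.reshape(-1, 1), l2)

# Toy data: same temporal signal, peak on channel 3 vs neighbouring channel 4.
t = np.linspace(0, 1, 30)
x = np.outer(np.sin(2 * np.pi * t), np.exp(-(np.arange(8) - 3) ** 2))
y = np.outer(np.sin(2 * np.pi * t), np.exp(-(np.arange(8) - 4) ** 2))

d_l2 = dtw(x, y, l2)           # plain L2 point distance: large
d_robust = dtw(x, y, spatial)  # DTW inside DTW: small
```

A sensor-location map would replace the 1-D channel ordering used here by distances on the actual sensor layout.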

Task 58

  • Name: Transformation of the Gerchberg-Saxton algorithm using Bayesian neural networks (a neural network approach to the phase-retrieval problem for images from the European synchrotron).
  • Task: The aim of the project is to improve the resolution of images of nanosized objects obtained in the laboratories of the European Synchrotron Radiation Facility.
  • Data: Contact an advisor for data (3GB).

  • References:

  • Basic algorithm: The transition from direct space to reciprocal space occurs using the Fourier transform. The Fourier transform is a linear transformation. Therefore, it is proposed to approximate it with a neural network. For example, an autoencoder for modeling forward and inverse Fourier transforms.
  • Solution: Transformation of the Gerchberg-Saxton algorithm using Bayesian neural networks. Use of information on physical limitations and expertise.
  • Novelty: Use of information about physical constraints and expert knowledge in the construction of the error function.
  • Authors: Experts Sergei Grudinin, Yuri Chushkin, Strizhov V.V., consultant Mark Potanin
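For reference, the classic Gerchberg-Saxton (error-reduction) iteration that the proposed Bayesian network would transform, sketched on a toy 1-D signal with numpy's FFT; the setup (magnitudes known in both domains) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.random(64)                          # unknown object, here real-valued
obj_mag = np.abs(true)                         # known object-domain magnitudes
fourier_mag = np.abs(np.fft.fft(true))         # known Fourier-domain magnitudes

# Start from a random phase and alternate magnitude projections.
x = obj_mag * np.exp(1j * rng.uniform(0, 2 * np.pi, 64))
err0 = np.linalg.norm(np.abs(np.fft.fft(x)) - fourier_mag) / np.linalg.norm(fourier_mag)

for _ in range(500):
    X = np.fft.fft(x)
    X = fourier_mag * np.exp(1j * np.angle(X))  # impose Fourier magnitudes
    x = np.fft.ifft(X)
    x = obj_mag * np.exp(1j * np.angle(x))      # impose object magnitudes

# Relative Fourier-magnitude error; non-increasing over iterations.
err = np.linalg.norm(np.abs(np.fft.fft(x)) - fourier_mag) / np.linalg.norm(fourier_mag)
```

In the proposed variant the FFT/IFFT pair would be approximated by an (auto)encoder, and physical constraints would enter the error function.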

Task 63

  • Name: Hierarchical alignment of time sequences.
  • Task: The problem of aligning sequences of complex events is considered. An example is complex human behavior: for data from IMU sensors one can put forward the hypothesis that there is a raw signal, aggregates of "elementary movements", and aggregates of human "actions". Each of these levels of abstraction can be distinguished and operated on directly.
    In order to accurately recognize the sequence of actions, it is possible to use metric methods (for example, DTW, as a method that is resistant to time shifts). For a more accurate quality of timeline alignment, it is possible to carry out alignment at different levels of abstraction.
    It is proposed to explore such a hierarchical approach to sequence alignment, based on the possibility of applying alignment algorithms to objects of different structures, having a distance function on them.
  • References:
  • Basic algorithm: classic DTW.
  • Solution: It is proposed to perform the transition from one level of abstraction to another by using convolutional and recurrent neural networks. Then the object at the lower level of abstraction is the original signal. At the second level - a signal from the hidden layer of the model (built on the objects of the lower level), the dimension of which is much less, and the upper layer - a signal from the hidden layer of the model (built on the objects of the middle level).
    In this case, DTW is computed separately between the lower, the middle, and the upper levels, but the objects for computing the distance are formed taking into account the alignment path between the objects of the previous level.
    This method is considered as a way to increase the interpretability of the alignment procedure and the accuracy of the action classification in connection with the transition to higher-level patterns. In addition, a significant increase in speed is expected.
  • Novelty: The idea of aligning time sequences simultaneously at several levels of abstraction is proposed. The method should significantly improve the interpretability of alignment algorithms and increase their speed.
  • Authors: Strizhov V.V. - Expert, Gleb Morgachev, Alexey Goncharov - consultants.

Task 57

  • Name: Additive regularization and privileged learning in the problem of predicting the state of the ocean
  • Task: There is a sample of data from ocean buoys, it is required to predict the state of the ocean at different points in time.
  • Data: The buoys provide data on wave height, wind speed, wind direction, wave period, sea level pressure, air temperature and sea surface temperature with a resolution of 10 minutes to 1 hour.
  • References:
  • Basic algorithm: Using a simple neural network.
  • Solution: Add a system of differential equations to the basic algorithm (a simple neural network). Explore the properties of the parameter spaces of the teacher and the student under the proposed approach.
  • Novelty: Investigation of the parameter space of the teacher and the student and their change. It is possible to set up separate teacher and student models and track the change in their parameters in the optimization process - variance, change in the quality of the student when adding teacher information, complexity.
  • Authors: Strizhov V.V., Mark Potanin


Task 52

  • Name: Predicting the quality of protein models using spherical convolutions on 3D graphs.
  • Task: The purpose of this work is to create and study a new convolution operation on three-dimensional graphs in the framework of assessing the quality of three-dimensional protein models (a regression task on graph nodes).
  • Data: Models generated by CASP competitors are used (http://predictioncenter.org).
  • References:
    • [19] More about the task.
    • [20] Relational inductive biases, deep learning, and graph networks.
    • [21] Geometric deep learning: going beyond euclidean data.
  • Basic algorithm: As a basic algorithm, we will use a neural network based on the graph convolution method, which is generally described in [22].
  • Solution: The presence of a peptide chain in proteins makes it possible to uniquely introduce local coordinate systems for all graph nodes, which makes it possible to create and apply spherical filters regardless of the graph topology.
  • Novelty: In the general case, graphs are irregular structures, and in many graph learning tasks, the sample objects do not have a single topology. Therefore, the existing operations of convolutions on graphs are greatly simplified or do not generalize to different topologies. In this paper, we propose to consider a new method for constructing a convolution operation on three-dimensional graphs, for which it is possible to uniquely choose local coordinate systems associated with each node.
  • Authors: Sergei Grudinin, Ilya Igashov.

Task 44+

  • Name: Early prediction of sufficient sample size for a generalized linear model.
  • Task: The problem of experiment design is investigated: estimating a sufficient sample size from the data. The sample is assumed to be simple and described by an adequate model; otherwise, the sample is generated by a fixed probabilistic model from a known class of models. The sample size is considered sufficient if the model is reconstructed with sufficient confidence. It is required, knowing the model, to estimate a sufficient sample size at the early stages of data collection.
  • Goal: From a small simple i.i.d. sample, predict the error on a larger, replenished one. The predictive model is smooth and monotonic in its first two derivatives. Model selection is by exhaustive search or a genetic algorithm. The model depends on the reduced covariance matrix of the GLM parameters (to be explored).
  • Data: For the computational experiment, it is proposed to use classical samples from the UCI repository. Link to selections https://github.com/ttgadaev/SampleSizeEstimation/tree/master/datasets
  • References:
    1. Overview of Methods, Motivation and Problem Statement for Sample Size Estimation
    2. http://svn.code.sf.net/p/mlalgorithms/code/PhDThesis/.
    3. Bootstrap method. https://projecteuclid.org/download/pdf_1/euclid.aos/1.

Bishop, C. 2006. Pattern Recognition and Machine Learning. Berlin: Springer. 758 p.

  • Basic algorithm: We will say that the sample size is sufficient if the log-likelihood, computed via bootstrap on subsamples of size m, has small variance. We approximate the dependence of the mean and variance of the log-likelihood on the sample size.

  • Solution: The methods described in the review are asymptotic or require a deliberately large sample size. The new method should be to predict volume in the early stages of experiment design, i.e. when data is scarce.
  • Authors: consultant - Malinovsky G., Strizhov V.V. (Expert)
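The bootstrap criterion from the basic algorithm can be sketched as follows, with a Gaussian model standing in for the GLM (an illustrative assumption): the variance of the bootstrap log-likelihood shrinks as the subsample size m grows, and "sufficient" m is where it falls below a chosen threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, 1000)              # stand-in for a collected sample

def loglik_variance(data, m, n_boot=200):
    """Bootstrap variance of the mean log-likelihood at subsample size m."""
    vals = []
    for _ in range(n_boot):
        s = rng.choice(data, size=m, replace=True)
        mu, sigma = s.mean(), s.std() + 1e-12
        # Mean log-likelihood of the subsample under the model fitted to it.
        ll = -0.5 * np.log(2 * np.pi * sigma ** 2) \
             - 0.5 * np.mean((s - mu) ** 2) / sigma ** 2
        vals.append(ll)
    return float(np.var(vals))

v_small, v_large = loglik_variance(data, 20), loglik_variance(data, 500)
```

Extrapolating the curve m -> variance from small m is then the early-prediction step the task asks for.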


Task 12

  • Name: Machine translation training without parallel texts.
  • Task: The problem of building a text translation model without using parallel texts, i.e. pairs of identical sentences in different languages, is considered. This problem arises when building translation models for low-resource languages (languages for which little data is publicly available).
  • Data: A selection of articles from Wikipedia in two languages.
  • References:
    • [23] Unsupervised Machine Translation Using Monolingual Corpora Only
    • [24] Sequence to sequence.
    • [25] Autoencoding.
    • [26] Training with Monolingual Training Data.
  • Basic algorithm: Unsupervised Machine Translation Using Monolingual Corpora Only.
  • Solution: As a translation model, it is proposed to consider a combination of two auto-encoders, each of which is responsible for presenting sentences in one of the languages. The models are optimized in such a way that the latent spaces of autoencoders for different languages match. As an initial representation of sentences, it is proposed to consider their graph description obtained using multilingual ontologies.
  • Novelty: A method for constructing a translation model is proposed, taking into account graph descriptions of sentences.
  • Authors: Oleg Bakhteev, Strizhov V.V.,


Task 8

  • Name: Generation of features using locally approximating models (Classification of human activities according to measurements of fitness bracelets).
  • Task: It is required to test the hypothesis that the sample becomes simple in the space of generated features. The features are the optimal parameters of approximating models; the whole sample is not simple and requires a mixture of models to approximate it. Explore the informativeness of the generated features - the parameters of approximating models trained on segments of the original time series. From accelerometer and gyroscope measurements it is required to determine the type of activity of a worker. It is assumed that the time series of measurements contain elementary movements that form clusters in the space of time series descriptions; the characteristic duration of a movement is on the order of seconds. The time series are labeled with activity-type labels (work, leisure); the characteristic duration of an activity is on the order of minutes. It is required to recover the type of activity from the description of the time series and the cluster.
  • Data: WISDM accelerometer time series (Time series (library of examples), section Accelerometry).
    • WISDM (Kwapisz, J.R., G.M. Weiss, and S.A. Moore. 2011. Activity recognition using cell phone accelerometers. ACM SigKDD Explorations Newsletter. 12(2):74-82.), USC-HAD or, more complex, accelerometer data (Human activity recognition using smart phone embedded sensors: A Linear Dynamical Systems method, W Wang, H Liu, L Yu, F Sun - Neural Networks (IJCNN), 2014)
  • References:
    • Motrenko A.P., Strijov V.V. Extracting fundamental periods to segment human motion time series // Journal of Biomedical and Health Informatics, 2016, Vol. 20, No. 6, 1466 - 1476. URL
    • Karasikov M.E., Strizhov V.V. Classification of time series in the space of parameters of generating models // Informatics and Its Applications, 2016. URL
    • Kuznetsov M.P., Ivkin N.P. Algorithm for Classifying Accelerometer Time Series by Combined Feature Description // Machine Learning and Data Analysis. 2015. T. 1, No. 11. C. 1471 - 1483. URL
    • Isachenko R.V., Strizhov V.V. Metric learning in multiclass time series classification problems // Informatics and Its Applications, 2016, 10(2) : 48-57. URL
    • Zadayanchuk A.I., Popova M.S., Strizhov V.V. Choosing the optimal model for classifying physical activity based on accelerometer measurements // Information technologies, 2016. URL
    • Ignatov A., Strijov V. Human activity recognition using quasiperiodic time series collected from a single triaxial accelerometer // Multimedia Tools and Applications, 2015, 17.05.2015 : 1-14. URL
  • Basic algorithm: Basic algorithm described in [Karasikov, Strizhov: 2016] and [Kuznetsov, Ivkin: 2014].
  • Solution: It is required to build a set of locally approximating models and choose the most adequate ones. Find the optimal segmentation method and the optimal description of the time series. Construct a metric space of descriptions of elementary motions.
  • Novelty: A standard for building locally approximating models is created. The two characteristic timescales of the description of human activity are connected in a combined problem statement.
  • Authors: Expert - Strizhov V.V., consultants - Alexandra Galtseva, Danil Sayranov.
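A minimal sketch of feature generation with locally approximating models: fixed-length segments, a local autoregressive model per segment, and the AR coefficients as features. The synthetic two-activity signal and the AR order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
# Two alternating "activities" with different dominant frequencies.
series = np.where((t // 200) % 2 == 0, np.sin(0.3 * t), np.sin(1.0 * t))
series = series + rng.normal(0, 0.05, t.size)

def ar_features(segment, order=2):
    """Least-squares AR(order) coefficients of one segment."""
    X = np.column_stack([segment[i:len(segment) - order + i] for i in range(order)])
    y = segment[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

segments = series.reshape(-1, 100)             # 10 segments of 100 samples
features = np.array([ar_features(s) for s in segments])
# Segments of the same activity should cluster in this 2-D feature space.
```

In the task itself, segmentation would follow the fundamental periods of the motion and the local models would be chosen by adequacy, not fixed in advance.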

2020

Author Topic Links Consultant Letters Reviewer
Grebenkova Olga Variational optimization of deep learning models with model complexity control LinkReview

GitHub Paper Slides Video

Oleg Bakhteev AILP+UXBR+HCV+TEDWS Shokorov Vyacheslav

Review

Shokorov Vyacheslav Text recognition based on skeletal representation of thick lines and convolutional networks LinkReview

GitHub Paper Slides Video

Denis Ozherelkov AIL Grebenkova Olga

Review

Filatov Andrey Intention forecasting. Investigation of the properties of local models in the spatial decoding of brain signals LinkReview

GitHub Paper Slides Video

Valery Markin AILPHUXBRCVTEDWS Hristolubov Maxim

Review

Islamov Rustem Analysis of the properties of an ensemble of locally approximating models LinkReview

GitHub Paper Slides Video

Andrey Grabovoi AILPHUXBRCVTEDWS Gunaev Ruslan

Review

Zholobov Vladimir Early prediction of sufficient sample size for a generalized linear model. LinkReview

GitHub Paper Slides Video

Grigory Malinovsky AILPHUXBRCVTEWSF Vayser Kirill

Review

Vayser Kirill Additive regularization and its meta parameters when choosing the structure of deep learning networks LinkReview

GitHub Paper Slides Video

Mark Potanin AILP+HUX+BRCV+TEDWS Zholobov Vladimir

Review

Bishuk Anton Solution of an optimization problem combining classification and regression to estimate the binding energy of a protein and small molecules. LinkReview

GitHub Paper Slides Video

Maria Kadukova AILPHUXBRCVTEDH Filippova Anastasia
Filippova Anastasia Step detection for IMU navigation via deep learning LinkReview

GitHub Paper Slides EnglishPaper Video

Tamaz Gadaev AIL0PUXBRCVSF Bishuk Anton

Review

Savelev Nickolay Distributed optimization under Polyak-Łojasiewicz conditions LinkReview

GitHub Paper Slides Video

A. N. Beznosikov AILPHUXBRCVTEDWS Khary Alexandra

Review

Khary Alexandra Theoretical validity of the application of metric classification methods using dynamic alignment (DTW) to spatiotemporal objects. LinkReview

GitHub Paper Slides Video

Gleb Morgachev, Alexey Goncharov AILPHUXBRCVTEDCWS Savelev Nickolay

Review

Hristolubov Maxim Generating features using locally approximating models (Classification of human activities by measurements of fitness bracelets) LinkReview

GitHub Paper Slides Video

Alexandra Galtseva, Danil Sayranov AILPH Filatov Andrey

Review

Mamonov Kirill Nonlinear ranking of exploratory information search results. LinkReview

GitHub Paper Slides Video

Maxim Eremeev AILPHU+XBRC+V+TEDHWJSF
Pavlichenko Nikita Predicting the quality of protein models using spherical convolutions on 3D graphs. LinkReview

GitHub Paper Slides Video

Sergei Grudinin, Ilya Igashov AILPUXBRHCVTEDH
Sodikov Mahmud, Skachkov Daniel Agnostic neural networks Code

Paper Slides Video

Radoslav Neichev AILPHUXBRC+VTEDHWJSF Kulagin Petr

Review

Gunaev Ruslan Graph Neural Network in Reaction Yield prediction LinkReview

Github Paper Slides Video

Philip Nikitin AILPUXBRHCVTEDHWSF Islamov Rustem

Review

Yaushev Farukh Investigation of ways to match models by reducing the dimension of space LinkReview

Github Paper Slides Video

Roman Isachenko AILPUXBRHCVTEDHWJS Zholobov Vladimir

Review

Task 51

  • Name: Analysis of the properties of an ensemble of locally approximating models.
  • Task: This paper considers the problem of constructing a universal approximator - a multimodel consisting of a given finite set of local models, each of which approximates a connected region in feature space. It is assumed that the set of local models covers the entire space of objects. A convex combination of the local models is considered as the aggregating function, with coefficients given by a function of the object - the gate function.
  • Required: To construct an algorithm for optimizing the parameters of local models and parameters of the gate function. It is required to propose a metric in the space of objects, a metric in the space of models.
  • Data:
    1. Synthetically generated data.
    2. Energy consumption forecasting data. It is proposed to use the following models as local models: working day, day off. (Energy Consumption, Turk Electricity Consumption German Spot Price).
  • References:
    1. Overview of methods for estimating sample size
    2. Vorontsov's lectures on compositions
    3. Vorontsov's lectures on compositions
    4. Esen Y.S., Wilson J., Gader P.D. Twenty Years of Mixture of Experts // IEEE Transactions on Neural Networks and Learning Systems. 2012. Vol. 23. No. 8. P. 1177-1193.
    5. Pavlov K.V. Selection of multilevel models in classification problems, 2012
  • Basic algorithm: As a basic algorithm, it is proposed to use a two-level optimization problem, where local models are optimized at one iteration and at the next iteration, the parameters of the gate function are optimized.
  • Authors: Grabovoi A.V. (consultant), Strizhov V.V. (Expert)
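The two-level optimization from the basic algorithm can be sketched with two linear experts and a logistic gate, alternating between refitting the experts by weighted least squares and refitting the gate. The hard-target gate update below is a simplification for illustration, not the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (400, 1))
# Two regimes meeting at x = 0: one local model per regime.
y = np.where(x[:, 0] < 0, 2 * x[:, 0] + 1, -3 * x[:, 0] + 1)
X = np.hstack([x, np.ones_like(x)])            # design matrix with bias

W = np.zeros((2, 2))                           # two local linear models
g = np.array([1.0, 0.0])                       # gate: logistic in x

def gate(X, g):
    """Convex-combination weights (responsibilities) of the two experts."""
    p = 1.0 / (1.0 + np.exp(-(X @ g)))
    return np.stack([p, 1.0 - p], axis=1)

for _ in range(50):
    P = gate(X, g)
    # Step 1: refit each local model by weighted least squares.
    for k in range(2):
        A = X * P[:, k][:, None]
        W[k] = np.linalg.solve(A.T @ X + 1e-6 * np.eye(2), A.T @ y)
    # Step 2: refit the gate toward whichever expert fits each point better.
    err = (X @ W.T - y[:, None]) ** 2
    target = np.where(err[:, 0] < err[:, 1], 4.0, -4.0)
    g = np.linalg.solve(X.T @ X + 1e-6 * np.eye(2), X.T @ target)

pred = np.sum(gate(X, g) * (X @ W.T), axis=1)
mse = float(np.mean((pred - y) ** 2))

# A single global linear model for comparison.
w_single = np.linalg.solve(X.T @ X, X.T @ y)
mse_single = float(np.mean((X @ w_single - y) ** 2))
```

The mixture beats the single global model because the gate routes each region to its own expert; the metrics on objects and models asked for in the task would refine both steps.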

Task 54

  • Name: Finding the pupil in an eye image using the brightness projection method
  • Task: Given a monochrome bitmap image of the eye, it is necessary to determine the approximate coordinates of the pupil center. "Approximate" means that the computed pupil center must lie inside a circle centered at the true pupil center with half the true radius. The algorithm must be very fast.

  • Data: About 200 thousand eye images. For each, the position of the true circle is marked - for the purpose of training and testing the method being created.
  • Basic algorithm: To speed up work with the image, it is proposed to aggregate data using brightness projections. Image brightness is a function of two discrete arguments I(x, y). Its projection onto the horizontal axis is P(x)=\sum \limits_y I(x,y). Similarly, projections are constructed on axes with an inclination. Having built several projections (two, four), based on them, you can try to determine the position of the pupil (compact dark area) using heuristics and / or a neural network. It is interesting to evaluate the capabilities of the neural network in this task.
  • References: Zhi-Hua Zhou, Xin Geng. Projection functions for eye detection // Pattern Recognition. 2004. V.37. N.5. P.1049-1056. https://doi.org/10.1016/j.patcog.2003.09.006
  • Authors: Matveev I.A.

Task 55

  • Name: Search for the boundaries of the iris by the method of circular projections
  • Task: Given a monochrome bitmap image of the eye, see examples (https://cloud.mail.ru/public/2DBu/5c6F6e3LC). The approximate position of the pupil center is also known. "Approximate" means that the computed pupil center lies no farther from the true center than half the true radius. It is necessary to determine the approximate positions of the circles approximating the pupil and iris. The algorithm must be very fast.
  • Data: About 200 thousand eye images. For each, the position of the true circle is marked - for the purpose of training and testing the method being created.
  • Basic algorithm: To speed up work with the image, it is proposed to aggregate data using circular projections of brightness. Circular projection is a function that depends on the radius, the value of which P(r) is equal to the integral of the directed image brightness gradient over a circle of radius r (or along an arc of a circle). Example for one arc (right quadrant) and for four arcs. Having built some circular projections, based on them, you can try to determine the position of the inner and outer borders of the iris (ring) using heuristics and / or a neural network. It is interesting to evaluate the capabilities of the neural network in this task.
  • References: Matveev I.A. Detection of Iris in Image By Interrelated Maxima of Brightness Gradient Projections // Applied and Computational Mathematics. 2010. V.9. N.2. P.252-257. https://www.researchgate.net/publication/228396639_Detection_of_iris_in_image_by_interrelated_maxima_of_brightness_gradient_projections
  • Authors: Matveev I.A.
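The circular projection P(r) can be sketched as follows; the synthetic disk image, the angular sampling density, and the finite-difference radial gradient are illustrative choices for the example, not the method of the cited paper:

```python
import numpy as np

def circular_projection(img, cx, cy, r, n_angles=180):
    """P(r): mean radial brightness gradient over a circle of radius r."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    total = 0.0
    for a in angles:
        ux, uy = np.cos(a), np.sin(a)
        # Finite-difference gradient along the radial direction.
        x_out, y_out = int(round(cx + (r + 1) * ux)), int(round(cy + (r + 1) * uy))
        x_in,  y_in  = int(round(cx + (r - 1) * ux)), int(round(cy + (r - 1) * uy))
        total += img[y_out, x_out] - img[y_in, x_in]
    return total / n_angles

# Synthetic eye: dark pupil disk of radius 15 centered at (32, 32).
yy, xx = np.mgrid[0:64, 0:64]
img = np.where((xx - 32) ** 2 + (yy - 32) ** 2 <= 15 ** 2, 10.0, 200.0)

proj = [circular_projection(img, 32, 32, r) for r in range(3, 25)]
best_r = 3 + int(np.argmax(proj))  # radius of the strongest dark-to-bright transition
print(best_r)  # close to the true pupil radius of 15
```

The pupil boundary shows up as a sharp maximum of P(r); the iris boundary gives a second, weaker maximum at a larger radius.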

Task 56

  • Name: Construction of local and universal interpretable scoring models
  • Task: Build a simple and interpretable scoring system as a superposition of local models, subject to the requirement that the system retain knowledge about key customers and features (in other words, that it account for new economic phenomena). The model must be a superposition, with each element controlled by its own quality criterion. Introduce a schedule for optimizing the structure and parameters of the model: the system must operate as a single optimization chain. Propose an algorithm for selecting features and objects.
  • Data:
  1. Data from OTP Bank. The sample contains records of 15,223 clients classified into two classes: 1 - there was a response (1812 clients), 0 - there was no response (13411 clients). Feature descriptions of clients consist of 50 features, which include, in particular, age, gender, social status in relation to work, social status in relation to pension, number of children, number of dependents, education, marital status, branch of work. The data are available at the following addresses: www.machinelearning.ru/wiki/images/2/26/Contest_MMRO15_OTP.rar (sample A), www.machinelearning.ru/wiki/images/5/52/Contest_MMRO15_OTP_(validation).rar (sample B).
  2. Data from Home Credit: https://www.kaggle.com/c/home-credit-default-risk/data
  • References:
  1. Strijov V.V. Error function in regression analysis // Factory Laboratory, 2013, 79(5) : 65-73.
  2. Bishop C. M. Linear models for classification // In: Pattern Recognition and Machine Learning. Eds.: M. Jordan, J. Kleinberg, B. Scholkopf. New York: Springer Science+Business Media, 2006. P. 203-208.
  3. Tokmakova A.A. Obtaining Stable Hyperparameter Estimates for Linear Regression Models // Machine Learning and Data Analysis. 2011. N.2. P.140-155.
  4. S. Scitovski and N. Sarlija. Cluster analysis in retail segmentation for credit scoring // CRORR 5. 2014. 235–245
  5. Goncharov A.V. Building Interpretable Deep Learning Models in the Social Ranking Problem
  • Basic algorithm: Iteratively reweighted least squares (described in [2]).
  • Solution: It is proposed to build a scoring system that contains a preprocessing block for generating metric features. It is proposed to investigate the influence of the unequal importance of objects on feature selection, and to investigate joint selection of features and objects when building the model. It is required to implement a schedule for optimizing the model structure using an algorithm based on the analysis of covariance matrices of model hyperparameters. The schedule includes gradual replenishment of the sets of features and objects; the feature sample size is determined by controlling the error variance. The main quality criterion for the system: ROC AUC (Gini).
  • Novelty:
  1. The model structure optimization schedule must satisfy the requirement to rebuild the model at any time without losing its characteristics.
  2. Accounting for the unequal value of objects in the selection of features
  • Authors: Pugaeva I.V. (consultant), Strizhov V.V. (Expert)
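The basic algorithm above (iteratively reweighted least squares for logistic regression, in the spirit of Bishop's description) can be sketched as follows; the toy data and the small ridge term stabilizing the Newton step are assumptions of the example:

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Iteratively reweighted least squares for logistic regression.
    X: (n, d) design matrix, y: (n,) labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted probabilities
        W = p * (1.0 - p)                         # weights of the working LS problem
        # Newton step on the log-likelihood; tiny ridge keeps H invertible.
        H = X.T @ (W[:, None] * X) + 1e-6 * np.eye(X.shape[1])
        w = w + np.linalg.solve(H, X.T @ (y - p))
    return w

# Toy scoring data: intercept column plus one informative feature.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)

w = irls_logistic(X, y)
acc = np.mean(((X @ w) > 0) == (y > 0.5))
print(round(acc, 2))
```

Object weighting (the unequal importance of clients mentioned above) can be added by multiplying W elementwise with per-object weights.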

Task 59

  • Name: Distributed optimization under Polyak-Łojasiewicz conditions
  • Task: The task is to efficiently solve large systems of nonlinear equations using a network of calculators.
  • Solution: A new method for the decentralized distributed solution of systems of nonlinear equations under Polyak-Łojasiewicz conditions is proposed. The approach is based on the fact that the distributed optimization problem can be represented as a composite optimization problem (see [2] in the references), which in turn can be solved by analogs of the similar-triangles or sliding methods (see [2] in the references).
  • Basic algorithm: The proposed method is compared with gradient descent and accelerated gradient descent
  • References:
  1. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition https://arxiv.org/pdf/1608.04636.pdf
  2. Linear Convergence for Distributed Optimization Under the Polyak-Łojasiewicz Condition https://arxiv.org/pdf/1912.12110.pdf
  3. Optimal Decentralized Distributed Algorithms for Stochastic Convex Optimization https://arxiv.org/pdf/1911.07363.pdf
  4. Modern numerical optimization methods, universal gradient descent method https://arxiv.org/ftp/arxiv/papers/1711/1711.00394.pdf
  • Novelty: Reduction of a distributed optimization problem to a composite optimization problem and its solution under Polyak-Łojasiewicz conditions
  • Authors: Expert: A.V. Gasnikov, consultant: A.N. Beznosikov
  • Comment: it is important to set up a computational experiment in this task, otherwise the task will be poorly compatible with the course.
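As a minimal computational-experiment sketch of the PL setting: gradient descent on f(x) = 0.5‖Ax − b‖² with a rank-deficient A, which satisfies the Polyak-Łojasiewicz inequality without being strongly convex; the spectrum below is an arbitrary illustrative choice:

```python
import numpy as np

# f(x) = 0.5 * ||A x - b||^2 with rank-deficient A satisfies the
# Polyak-Lojasiewicz (PL) inequality but is NOT strongly convex.
d = np.array([2.0, 1.5, 1.0, 0.8, 0.5, 0, 0, 0, 0, 0])  # rank-5 spectrum
A = np.diag(d)
b = A @ np.ones(10)            # consistent system, so min f = 0

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

L = max(d) ** 2                # Lipschitz constant of grad f
x = np.zeros(10)
vals = [f(x)]
for _ in range(200):
    x -= (1.0 / L) * (A.T @ (A @ x - b))   # plain gradient step
    vals.append(f(x))

# Under PL, f(x_k) - f* decays geometrically despite the flat directions.
print(vals[-1] < 1e-8)  # True
```

The same experiment with an accelerated method and with a decentralized variant over a communication graph would reproduce the comparison proposed above.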

Task 17

  • Name: Intention forecasting. Investigation of the properties of local models in the spatial decoding of brain signals
  • Task: When building brain-computer interface systems, simple and stable models are used. An important stage in constructing such a model is building an adequate feature space. Previously, this task was solved by extracting features from the frequency characteristics of the signals.
  • Data: ECoG/EEG brain signal data sets.
  • References:
    1. Motrenko A.P., Strijov V.V. Multi-way feature selection for ECoG-based brain-computer Interface // Expert systems with applications. - 2018.
    2. Eliseyev A., Aksenova T. Stable and artifact-resistant decoding of 3D hand trajectories from ECoG signals using the generalized additive model //Journal of neural engineering. – 2014.
  • Basic algorithm: The comparison is proposed to be made with the partial least squares algorithm.
  • Solution: In this paper, it is proposed to take into account the spatial dependence between sensors that read data. To do this, it is necessary to locally model the spatial impulse/signal and build a predictive model based on the local description.
  • Novelty: An essentially new way of constructing a feature description in the problem of signal decoding is proposed. Bonus: analysis of changes in the structure of the model, adaptation of the structure when the sample changes.
  • Authors: Strizhov V.V., Roman Isachenko - Experts, consultants – Valery Markin, Alina Samokhina

Task 9

  • Name: Text recognition based on skeletal representation of thick lines and convolutional networks
  • Task: It is required to build two CNNs, one recognizes a raster representation of an image, the other a vector one.
  • Data: Fonts in raster representation.
  • References: List of works [27], in particular arXiv:1611.03199, and:
    • Goyal P., Ferrara E. Graph embedding techniques, applications, and performance: A survey. arXiv:1705.02801, 2017.
    • Cai H., Zheng V.W., Chang K.C.-C. A comprehensive survey of graph embedding: Problems, techniques and applications. arXiv:1709.07604, 2017.
    • Grover A., Leskovec J. node2vec: Scalable Feature Learning for Networks. arXiv:1607.00653, 2016.
    • Mestetskiy L., Semenov A. Binary Image Skeleton - Continuous Approach // Proceedings 3rd International Conference on Computer Vision Theory and Applications, VISAPP 2008. P. 251-258. URL
    • Kushnir O.A., Seredin O.S., Stepanov A.V. Experimental study of regularization parameters and approximation of skeletal graphs of binary images // Machine Learning and Data Analysis. 2014. V.1. N.7. P.817-827. URL
    • Zhukova K.V., Reyer I.A. Basic Skeleton Connectivity and Parametric Shape Descriptor // Machine Learning and Data Analysis. 2014. V.1. N.10. P.1354-1368. URL
    • Kushnir O., Seredin O. Shape Matching Based on Skeletonization and Alignment of Primitive Chains // Communications in Computer and Information Science. 2015. V. 542. P. 123-136. URL
  • Basic algorithm: Convolution network for bitmap.
  • Solution: It is required to propose a method for collapsing graph structures, which allows generating an informative description of the thick line skeleton.
  • Novelty: A method is proposed for improving the quality of recognition of thick lines due to a new method for generating their descriptions.
  • Authors: Experts Reyer I.A., Strizhov V.V., Mark Potanin, consultant Denis Ozherelkov

Task 60

  • Name: Variational optimization of deep learning models with model complexity control
  • Task: The task of optimizing a deep learning model with a predetermined model complexity is considered. It is required to propose a model optimization method that allows generating new models with a given complexity and low computational costs.
  • Data: MNIST, CIFAR
  • References:
  • Basic algorithm: Random search
  • Solution: The proposed method is to represent a deep learning model as a hypernet (a network that generates the parameters of another network) using a Bayesian approach. Probabilistic assumptions about the parameters of the deep learning model are introduced, and a variational lower bound on the Bayesian evidence of the model is maximized. The variational bound is considered as a conditional value that depends on an external complexity parameter.
  • Novelty: The proposed method allows generating models in one-shot mode (practically without retraining) with the required model complexity, which significantly reduces the cost of optimization and retraining.
  • Authors: Oleg Bakhteev, Strizhov V.V.
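The hypernet idea can be illustrated with a deliberately tiny NumPy sketch; everything here (the architecture, the shapes, and the soft weight-shrinkage used as a stand-in for complexity control) is an illustrative assumption, not the proposed variational method:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 10, 16

def hypernet(lam, H1, H2):
    """Generate target-model weights w(lambda) from a scalar complexity knob.
    A trained hypernet would emit a model per lambda without retraining."""
    h = np.tanh(H1 @ np.array([lam, 1.0]))   # hidden representation of lambda
    w = H2 @ h                                # generated weights, shape (d_in,)
    # Soft complexity control: larger lambda shrinks the generated weights.
    return w * np.exp(-lam)

# Untrained illustrative hypernet parameters.
H1 = rng.normal(size=(d_hidden, 2)) * 0.5
H2 = rng.normal(size=(d_in, d_hidden)) * 0.5

w_simple = hypernet(5.0, H1, H2)    # high penalty -> small-norm (simpler) model
w_complex = hypernet(0.0, H1, H2)   # no penalty  -> larger-norm model
print(np.linalg.norm(w_simple) < np.linalg.norm(w_complex))
```

In the actual task the hypernet is trained by maximizing the variational lower bound, with lambda entering the bound as the external complexity parameter.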

Task 61

  • Name: Selecting a deep learning model based on the triplet relationship of model and sample
  • Task: The one-shot problem of choosing a deep learning model is considered: choosing a model for a specific sample, drawn from some general population, should not be computationally expensive.
  • Data: MNIST, synthetic data
  • References:
  • Basic algorithm: Random search
  • Solution: It is proposed to consider the space of parameters and models as two domains with their own generative models. To obtain a connection between domains, a generalization of the variational derivation to the case of triplet constraints is used.
  • Novelty: New one-shot model training method
  • Authors: Oleg Bakhteev, Strizhov V.V.

Task 64

  • Name: Theoretical validity of applying metric classification methods with dynamic time warping (DTW) to spatiotemporal objects.
  • Task: It is necessary to survey the existing theoretical justifications for applying dynamic alignment methods to various objects, and to explore the use of such methods for spatiotemporal series.
    Proofs of the applicability of alignment methods show that the function generated by the dynamic alignment algorithm is a kernel, which in turn justifies the use of metric classification methods.
  • References:
  • Solution: For different formulations of the DTW method (with different internal distance functions between time series samples), find and collect in one place the proofs that the resulting function is a kernel.
    For a basic set of time series datasets (on which the accuracy of distance functions is commonly checked), verify the conditions of Mercer's theorem (positive definiteness of the matrix). Do this for various modifications of the DTW distance function (Sakoe-Chiba band, Itakura band, weighted DTW).
  • Novelty: Investigation of theoretical justifications for applying the dynamic alignment algorithm (DTW) and its modifications to space-time series.
  • Authors: Strizhov V.V. - Expert, Gleb Morgachev, Alexey Goncharov - consultants.
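A small empirical version of the Mercer check described above: build a DTW-based Gram matrix on random series and inspect its smallest eigenvalue. The exponential kernel form and the bandwidth choice are assumptions of the sketch; a negative eigenvalue on any sample certifies that the candidate kernel is not positive semidefinite:

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between 1-D series with absolute-value local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
series = [rng.normal(size=30).cumsum() for _ in range(15)]

# Candidate kernel K_ij = exp(-DTW(x_i, x_j)^2 / sigma^2); Mercer's theorem
# requires every such Gram matrix to be positive semidefinite.
D = np.array([[dtw(s, t) for t in series] for s in series])
K = np.exp(-D ** 2 / np.median(D[D > 0]) ** 2)
min_eig = np.linalg.eigvalsh(K).min()
print(min_eig)  # a negative value certifies the candidate kernel is NOT PSD
```

The same check, repeated with Sakoe-Chiba or Itakura constraints inside the DP recursion, gives the empirical part of the study.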

Task 66

  • Name: Agnostic neural networks
  • Task: Introduce a metric space into the problem of automatic construction (selection) of agnostic networks.
  • Data: Data from the Reinforcement learning area. Preferably the type of cars on the track.
  • References:
  • Basic algorithm: Networks from an archived article. Symbolic regression from an article in ESwA (you need to restore the code).
  • Solution: We create a model generator in the framework of symbolic regression. We create a model generator as a variational autoencoder (we won’t have time during the course). We study the metric properties of sample spaces (Euclidean) and models (Banach). We create a GAN pair - a generator-discriminator for predicting the structures of predictive models.
  • Novelty: So far, no one has succeeded. We discussed this with Tommi Yaakkola when he visited us at Yandex; he has not succeeded yet either.
  • Authors: Expert Strizhov V.V., Radoslav Neichev - consultant

Task 13

  • Name: Deep learning for RNA secondary structure prediction
  • Task: RNA secondary structure is an important feature which defines RNA functional properties. Its importance can be illustrated by the fact that it is evolutionarily preserved, and some types of functional RNAs always have the same secondary structure; for example, all tRNAs fold into a cloverleaf. As secondary structure often defines function, knowing it may help investigate the functions of novel RNA molecules. RNA folding is not as easy as DNA folding, because RNA is a single-stranded molecule which forms complicated base-pairing interactions, while DNA mostly exists as fully base-paired double helices. Current methods of RNA structure prediction rely on experimentally evaluated thermodynamic rules, but with thermodynamics alone only 80% of structures can be accurately predicted. We propose an AI-driven method for predicting RNA secondary structure inspired by the neural machine translation model.
  • Data: RNA sequences in form of strings of characters
  • References: https://arxiv.org/abs/1609.08144
  • Basic algorithm: https://www.ncbi.nlm.nih.gov/pubmed/16873527
  • Solution: Deep learning recurrent encoder-decoder model with attention
  • Novelty: Currently RNA secondary structure prediction still remains unsolved problem and to the best of our knowledge DL approach has never been introduced in the literature before
  • Authors: consultants Maria Popova, Alexander Isaev (we are awaiting their response; without it the task will be removed)

Task 65

  • Name: Approximation of low-dimensional samples by heterogeneous models
  • Task: The problem of knowledge transfer (Hinton's distillation, Vapnik's privileged learning) from one network to another is investigated.
  • Data: UCI samples, see what samples are used in papers on this topic
  • References:
  • Basic algorithm: described in the work of Neichev
  • Novelty: Exploring different sampling methods
  • Solution: Try different models covered in the lectures, from non-parametric to deep, compare them and visualize the likelihood functions
  • Authors: consultants Mark Potanin, (ask Andrey Grabovoi for help) Strizhov V.V.

Task 67

  • Name: Selection of topics in topic models for exploratory information retrieval.
  • Task: Test the hypothesis that when searching for similar documents by their topic vectors, not all topics are informative, so discarding some topics can increase the accuracy and completeness of the search. Consider the alternative hypothesis that instead of discarding topics, one can compare vectors by a weighted cosine proximity measure with adjustable weights.
  • Data: Text collections of sites habr.com and techcrunch.com. Labeled selections: queries and related documents.
  • References:
    1. Vorontsov K. V. Probabilistic Topic Modeling: An Overview of Models and Additive Regularization.
    2. Ianina A., Vorontsov K. Regularized Multimodal Hierarchical Topic Model for Document-by-Document Exploratory Search // FRUCT ISMW, 2019.
  • Basic algorithm: The topic model with regularizers and modalities described in the article (source code available).
  • Novelty: The question of the informativeness of topics for vector search of thematically related documents has not been studied before.
  • Solution: Evaluate the individual informativeness of topics by discarding them one at a time; then sort the topics by individual informativeness and determine a cutoff threshold for non-informative topics. A suggestion as to why this should work: background topics are not informative, and discarding them increases search precision and recall by a few percent.
  • Authors: Vorontsov K. V., consultant Anastasia Yanina.
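The alternative hypothesis (a weighted cosine proximity measure with adjustable weights) can be sketched directly; the toy topic vectors and the choice of which topic plays the "background" role are assumptions of the example:

```python
import numpy as np

def weighted_cosine(u, v, w):
    """Cosine proximity with per-topic weights w >= 0; w_t = 0 drops topic t,
    uniform weights recover the ordinary cosine similarity."""
    wu = w * u
    wv = w * v
    return (wu @ v) / (np.sqrt(wu @ u) * np.sqrt(wv @ v) + 1e-12)

# Two documents that agree on substantive topics 0-2 but differ in a
# "background" topic 3 that dominates the plain cosine.
doc_a = np.array([0.3, 0.2, 0.1, 0.4])
doc_b = np.array([0.3, 0.2, 0.1, 0.0])
uniform = np.ones(4)
drop_bg = np.array([1.0, 1.0, 1.0, 0.0])   # zero weight discards topic 3

print(round(weighted_cosine(doc_a, doc_b, uniform), 3))
print(round(weighted_cosine(doc_a, doc_b, drop_bg), 3))  # 1.0: identical once topic 3 is dropped
```

Discarding topics is the special case of 0/1 weights; the adjustable-weight measure interpolates between keeping and dropping each topic.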

Task 68

  • Name: Meta-learning of topic classification models.
  • Task: Develop universal heuristics for a priori assignment of modality weights in thematic models of text classification.
  • Data: Description of datasets, Folder with datasets.
  • References:
    1. Vorontsov K. V. Probabilistic Topic Modeling: An Overview of Models and Additive Regularization.
  • Basic algorithm: Thematic classification models for several datasets.
  • Novelty: In topic modeling, the problem of automatic selection of modality weights has not yet been solved.
  • Solution: Optimize the weights of modalities according to the quality criterion of text classification. Investigate the dependence of the optimal relative weights of modalities on the dimensional characteristics of the problem. Find formulas for estimating the initial values of modality weights without explicitly solving the problem. To reproduce datasets, apply sampling of fragments of source documents.
  • Authors: Vorontsov K. V., consultant Yulian Serdyuk.

Task 70

  • Name: Investigation of the structure of the target space when building a predictive model
  • Task: The problem of forecasting a complex target variable is studied. Complexity means the presence of dependencies (linear or non-linear). It is assumed that the initial data are heterogeneous: the spaces of the independent and target variables are of different nature. It is required to build a predictive model that takes into account dependencies in the source space of the independent variable as well as in the space of the target variable.
  • Data: Heterogeneous data: picture - text, picture - speech and so on.
  • Basic algorithm: As basic algorithms, it is proposed to use a linear model, as well as a nonlinear neural network model.
  • Authors: Strizhov V.V. - Expert, consultant: Isachenko Roman.

Task 71

  • Name: Investigation of ways to match models by reducing the dimension of space
  • Task: The task of predicting a complex target variable is investigated. Complexity means the presence of dependencies (linear or non-linear). It is proposed to study ways to take into account dependencies in the space of the target variable, as well as the conditions under which these dependencies affect the quality of the final predictive model.
  • Data: Synthetic data with known data generation hypothesis.
  • Basic algorithm: As basic algorithms, it is proposed to use space dimensionality reduction methods (PCA, PLS, autoencoder) and linear matching models.
  • Authors: Strizhov V.V. - Expert, consultant: Isachenko Roman.
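The basic pipeline above (dimensionality reduction followed by a linear matching model) can be sketched with PCA via SVD; the synthetic data with a shared low-dimensional cause, and the choice of an ordinary least-squares map between latents, are assumptions of the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heterogeneous data: X and Y are linear views of the same
# 2-D latent factor, plus small noise.
n, k = 200, 2
Z = rng.normal(size=(n, k))                   # hidden common factors
X = Z @ rng.normal(size=(k, 10)) + 0.01 * rng.normal(size=(n, 10))
Y = Z @ rng.normal(size=(k, 5)) + 0.01 * rng.normal(size=(n, 5))

def pca_project(M, k):
    """Center M and project onto its top-k principal directions."""
    Mc = M - M.mean(axis=0)
    _, _, Vt = np.linalg.svd(Mc, full_matrices=False)
    return Mc @ Vt[:k].T

Tx, Ty = pca_project(X, k), pca_project(Y, k)

# Match the two latent spaces with a least-squares linear map Tx -> Ty.
B, *_ = np.linalg.lstsq(Tx, Ty, rcond=None)
rel_err = np.linalg.norm(Tx @ B - Ty) / np.linalg.norm(Ty)
print(rel_err < 0.1)  # True: the latent spaces are linearly matchable
```

Replacing the two separate PCAs with PLS (which extracts directions maximizing covariance between X and Y) is the next baseline suggested above; an autoencoder gives the nonlinear analogue.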

Task 72

  • Name: Construction of a single latent space in the problem of modeling heterogeneous data.
  • Task: The task of predicting a complex target variable is investigated. Complexity means the presence of dependencies (linear or non-linear). It is proposed to build a single latent space for the independent and target variables. Model matching is proposed to be carried out in the resulting low-dimensional space.
  • Data: Heterogeneous data: picture - text, picture - speech and so on.
  • Basic algorithm: As basic algorithms, it is proposed to use space dimensionality reduction methods (PCA, PLS, autoencoder) and linear matching models.
  • Authors: Strizhov V.V. - Expert, consultant: Isachenko Roman.

Task 73

  • Name: Nonlinear ranking of exploratory information search results.
  • Task: Develop an algorithm for recommending the reading order of documents (a reading order, or reading list) found via exploratory information retrieval. Documents should be ranked from simple to complex and from general to specific, that is, in the order that makes it easiest for the user to get acquainted with a subject area new to them. The algorithm must build a reading graph: a partial order relation on the set of found documents; in particular, it can be a collection of trees (a document forest).
  • Data: Part of Wikipedia and reference reading graph derived from Wikipedia categories.
  • References:
    1. Vorontsov K. V. Probabilistic Topic Modeling: An Overview of Models and Additive Regularization.
    2. Georgia Koutrika, Lei Liu, and Steven Simske. Generating reading orders over document collections. HP Laboratories, 2014.
    3. James G. Jardine. Automatically generating reading lists. Cambridge, 2014.
  • Basic algorithm: described in the article G.Koutrika.
  • Novelty: The task has been little studied in the literature. Regularized multimodal topic models (ARTM, BigARTM) have never been applied to this problem.
  • Solution: The use of ARTM topic models in conjunction with estimates of the cognitive complexity of the text.
  • Authors: Vorontsov K. V., consultant Maxim Eremeev.

2019

Author Topic Links Consultant Reviewer
Severilov Pavel Task of searching characters in texts LinkReview

code paper slides video

Murat Apishev
Grigoriev Alexey Text recognition based on skeletal representation of thick lines and convolutional networks LinkReview

code, paper, slides video

Ilya Zharikov review Varenyk Natalia
Grishanov Alexey Automatic configuration of BigARTM parameters for a wide class of tasks LinkReview code, paperslides

video

Viktor Bulatov review Gerasimenko Nikolay
Yusupov Igor Dynamic alignment of multivariate time series LinkReview code paper slides video Alexey Goncharov
Varenyk Natalia Spherical CNN for QSAR prediction LinkReview, code, paper, slides video Maria Popova review Grigoriev Alexey
Beznosikov Alexander Z-learning of linearly-solvable Markov Decision Processes LinkReview

paper code slides video

Yury Maximov
Panchenko Svyatoslav Obtaining a simple sample at the output of the neural network layer LinkReview,

code, paper, slides

Gadaev Tamaz
Veselova Evgeniya Deep Learning for reliable detection of tandem repeats in 3D protein structures Code link review paper slides video Guillaume Pages, Sergei Grudinin
Aminov Timur Quality Prediction for a Feature Selection Procedure LinkReview code paper

slides

Roman Isachenko
Markin Valery Investigation of the properties of local models in the spatial decoding of brain signals LinkReview

code paper slides video

Roman Isachenko
Abdurahmon Sadiev Generation of features using locally approximating models LinkReview

code, paper, slides video

Anastasia Motrenko
Tagir Sattarov Machine translation training without parallel texts. LinkReview code paper, slides video Oleg Bakhteev
Gerasimenko Nikolay Thematic search for similar cases in the collection of acts of arbitration courts. LinkReview code paper slides video Ekaterina Artyomova reviewGrishanov Alexey

Task 40

  • Name: Quality prediction for the feature selection procedure.
  • Task: The solution of the feature selection problem is reduced to enumeration of binary cube vertices. This procedure cannot be performed for a sample with a large number of features. It is proposed to reduce this problem to optimization in a linear space.
  • Data: Synthetic data + simple samples
  • References:
    1. Bertsimas D. et al. Best subset selection via a modern optimization lens // The Annals of Statistics. 2016. V.44. N.2. P.813-852.
    2. Luo R. et al. Neural architecture optimization // Advances in Neural Information Processing Systems. 2018. P.7827-7838.
  • Basic algorithm: Popular feature selection methods.
  • Solution: In this paper, it is proposed to build a model that, based on a set of features, predicts the quality on a test sample. To do this, a mapping of a binary cube into a linear space is constructed. After that, the quality of the model in linear space is maximized. To reconstruct the solution of the problem, the model of inverse mapping into a binary cube is used.
  • Novelty: A constructively new approach to solving the problem of choosing models is proposed.
  • Authors: Strizhov V.V., Tetiana Aksenova, consultant – Roman Isachenko
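The combinatorial baseline that the task seeks to avoid, enumeration of the 2^p vertices of the binary cube, is easy to demonstrate for small p; the toy data and the additive complexity penalty are illustrative assumptions, while the paper proposes predicting test quality from a continuous embedding instead:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.normal(size=(n, p))
w_true = np.array([1.5, -2.0, 0.0, 0.0, 3.0, 0.0])   # only features 0, 1, 4 matter
y = X @ w_true + 0.1 * rng.normal(size=n)

best_mask, best_err = None, np.inf
for mask in product([0, 1], repeat=p):       # all 2^p vertices of the binary cube
    idx = [j for j in range(p) if mask[j]]
    if not idx:
        continue
    Xs = X[:, idx]
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    # Illustrative criterion: fit plus a complexity penalty per feature.
    err = np.mean((Xs @ w - y) ** 2) + 0.01 * len(idx)
    if err < best_err:
        best_mask, best_err = mask, err

print(best_mask)  # expected to select features 0, 1 and 4
```

For p = 50 this loop would need 2^50 fits, which is exactly why the task maps masks into a linear space and optimizes a quality-prediction surrogate there.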

Task 42

  • Name: Z-learning of linearly-solvable Markov Decision Processes
  • Task: Adapt Z-learning from [1] to the case of Markov Decision Process discussed in [2] in the context of energy systems. Compare it with standard (in reinforcement learning) Q-learning.
  • Data: We consider a Markov Process described via transition probability matrix. Given initial state vector (probability of being in a state at time zero), we generate data for the time evolution of the state vector. See [2] for an exemplary process describing evolution of an ensemble of energy consumers.
  • References:
    1. E. Todorov. Linearly-solvable Markov decision problems https://homes.cs.washington.edu/~todorov/papers/TodorovNIPS06.pdf
    2. Ensemble Control of Cycling Energy Loads: Markov Decision Approach. Michael Chertkov, Vladimir Y. Chernyak, Deepjyoti Deka. https://arxiv.org/abs/1701.04941
    3. Csaba Szepesvári. Algorithms for Reinforcement Learning. https://sites.ualberta.ca/~szepesva/papers/RLAlgsInMDPs.pdf
  • Basic algorithm: Principal comparison should be made with Q learning described in [3]
  • Solution: We suppose that plugging in algorithm from [1] directly into [2] gives faster and more reliable solution.
  • Novelty: In the area of power systems there is a huge demand for fast reinforcement learning algorithms, but there is still a lack of them (in particular, of algorithms that respect the physics/underlying graph)
  • Authors: Yury Maximov (consultant, expert), Michael Chertkov (expert)
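The core of Z-learning in Todorov's linearly-solvable setting is that the desirability function z(s) = exp(-v(s)) satisfies a linear fixed-point equation. A toy first-exit sketch is below; the chain, its passive dynamics and costs are illustrative assumptions, and a data-driven Z-learner would estimate the same fixed point from sampled transitions:

```python
import numpy as np

# Linearly-solvable MDP (Todorov, 2006): z = diag(exp(-q)) @ P @ z, where
# P is the passive dynamics and q the state cost ("Z-iteration").
P = np.array([                 # random-walk passive dynamics on a 4-state chain
    [0.5, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 1.0],      # state 3 is absorbing (the goal)
])
q = np.array([0.2, 0.2, 0.2, 0.0])   # per-step state costs

z = np.ones(4)
for _ in range(500):
    z_new = np.exp(-q) * (P @ z)
    z_new[3] = 1.0                   # boundary condition at the goal: exp(-0)
    converged = np.max(np.abs(z_new - z)) < 1e-12
    z = z_new
    if converged:
        break

v = -np.log(z)                       # optimal cost-to-go
print(np.round(v, 3))                # increases with distance from the goal
```

Q-learning on the same chain would require a max over actions at every update; the linear structure above removes that nonlinearity, which is the source of the expected speedup.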

Task 1

  • Name: Forecasting the direction of movement of the price of exchange instruments according to the news flow.
  • Task: Build and explore a model for predicting the direction of price movement. Given: 1. A set of news S and a set of timestamps T corresponding to the publication times of the news in S. 2. A time series P of the price of an exchange instrument and a time series V of the sales volume of this instrument over a time period T'. 3. The set T is a subset of the period T'. 4. Time intervals w=[w0, w1], l=[l0, l1], d=[d0, d1], where w0 < w1=l0 < l1=d0 < d1. It is required to predict the direction of movement of the price of the exchange instrument at time t=d0 from the news released in the period w.
  • Data:
    1. Financial data: data on quotes (at one tick interval) of several financial instruments (GAZP, SBER, VTBR, LKOH) for the 2nd quarter of 2017 from the Finam.ru website; for each point of the series, the date, time, price and volume are known.
    2. Text data: economic news for the 2nd quarter of 2017 from Forexis; each news is a separate html file.
  • References:
    1. Usmanova K.R., Kudiyarov S.P., Martyshkin R.V., Zamkovoy A.A., Strijov V.V. Analysis of relationships between indicators in forecasting cargo transportation // Systems and Means of Informatics, 2018, 28(3).
    2. Kuznetsov M.P., Motrenko A.P., Kuznetsova M.V., Strijov V.V. Methods for intrinsic plagiarism detection and author diarization // Working Notes of CLEF, 2016, 1609 : 912-919.
    3. Aysina Roza Munerovna, Thematic modeling of financial flows of corporate clients of a bank based on transactional data, final qualification work.
    4. Lee, Heeyoung, et al. "On the Importance of Text Analysis for Stock Price Prediction." LREC. 2014.
  • Basic algorithm: Method used in the article (4).
  • Solution: Using topic modeling (ARTM) and local approximation models to translate a sequence of texts corresponding to different timestamps into a single feature description. Quality criterion: F1-score, ROC AUC, profitability of the strategy used.
  • Novelty: To substantiate the connection between the time series, the convergent cross mapping method is proposed.
  • Authors: Ivan Zaputlyaev (consultant), Strizhov V.V., K.V. Vorontsov (Experts)

Task 3

  • Name: Dynamic alignment of multidimensional time series.
  • Task: A characteristic multidimensional time series is the trajectory of a point in 3-dimensional space. The two trajectories need to be optimally aligned with each other. For this, the distance DTW between two time series is used. In the classical representation, DTW is built between one-dimensional time series. It is necessary to introduce various modifications of the algorithm for working with high-dimensional time series: trajectories, corticograms.
  • Data: The data describes 6 classes of time series from the mobile phone's accelerometer. https://sourceforge.net/p/mlalgorithms/code/HEAD/tree/Group274/Goncharov2015MetricClassification/data/
  • References:
    1. Multidimensional DTW: https://pdfs.semanticscholar.org/76d3/5bd5a52453ebde80faaa1467d7effd74426f.pdf
  • Basic algorithm: Using L_p distances between the multidimensional samples of two time series, and their modifications.
  • Solution: Investigation of distances that are robust to reordering of the coordinates and of distances that are not. Experiments with other types of distances (cosine, RBF, others).
  • Novelty: There is no complete review and study of methods for working with multivariate time series. The dependence of the quality of the solution on the selected distances between measurements has not been studied.
  • Authors: Alexey Goncharov - consultant, Expert, Strizhov V.V. - Expert
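A minimal multidimensional DTW sketch with an L_p local distance between d-dimensional samples (the helix trajectory and the time-warped copy are illustrative assumptions):

```python
import numpy as np

def dtw_multidim(a, b, p=2):
    """DTW between multivariate series a (n, d) and b (m, d) with an
    L_p local distance between d-dimensional samples."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1], ord=p)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two 3-D trajectories: a helix and a time-warped (slowed-down) copy of it.
t = np.linspace(0, 2 * np.pi, 40)
traj1 = np.stack([np.cos(t), np.sin(t), t], axis=1)
traj2 = traj1[np.repeat(np.arange(40), 2)[:40]]   # each early sample repeated

print(dtw_multidim(traj1, traj1))          # 0.0 for identical trajectories
print(dtw_multidim(traj1, traj2) >= 0.0)   # True
```

Changing `ord=p` (or replacing the norm with a cosine or RBF-based local cost) gives the family of modifications the task proposes to compare; applying a fixed permutation to the coordinates of both series tests robustness to coordinate reordering.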

Task 43

  • Name: Getting a simple sample at the output of the neural network layer
  • Task: The output of the neural network is usually a generalized linear model over the outputs of the penultimate layer. It is necessary to propose a way to test the simplicity of the sample and its compliance with the generalized linear model (linear regression, logistic regression) using a system of statistical criteria.
  • Data: For the computational experiment, it is proposed to use classical samples from the UCI repository. Link to samples https://github.com/ttgadaev/SampleSize/tree/master/datasets
  • References: http://www.ccas.ru/avtorefe/0016d.pdf, pp. 49-63; Bishop C. Pattern Recognition and Machine Learning. Berlin: Springer, 2006.
  • Basic algorithm: White test, Wald test, Goldfeld-Quandt test, Durbin-Watson, Chi-square, Fry-Behr, Shapiro-Wilk
  • Solution: A system of tests for checking the simplicity of the sample (and the adequacy of the model): the independent variables are not random; the dependent variables are distributed normally or binomially; there are no gaps or outliers; the classes are balanced; the sample is approximated by a single model; the variance of the error function does not depend on the independent variable. The study is based on synthetic and real data.
  • Authors: Gadaev T. T. (consultant) Strizhov V.V., Grabovoi A.V. (Experts)
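One of the criteria above, the Durbin-Watson statistic for residual autocorrelation, is simple enough to sketch directly (the white-noise and AR(1) residual samples are illustrative assumptions):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: ~2 for uncorrelated residuals, toward 0 for
    positive and toward 4 for negative autocorrelation."""
    d = np.diff(residuals)
    return (d @ d) / (residuals @ residuals)

rng = np.random.default_rng(0)

# Residuals of an adequate model should look like white noise.
white = rng.normal(size=500)

# Autocorrelated residuals signal that a single linear model is inadequate.
ar = np.zeros(500)
for i in range(1, 500):
    ar[i] = 0.9 * ar[i - 1] + rng.normal()

print(round(durbin_watson(white), 1))  # near 2
print(round(durbin_watson(ar), 1))     # well below 2
```

Applied to the outputs of a network's penultimate layer, such statistics form the proposed battery of checks that the final generalized linear layer sees a "simple" sample.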

Task 14

  • Name: Deep Learning for reliable detection of tandem repeats in 3D protein structures more in PDF
  • Task: Deep learning algorithms pushed computer vision to a level of accuracy comparable to or higher than human vision. Similarly, we believe that it is possible to recognize the symmetry of a 3D object with very high reliability when the object is represented as a density map. The optimization problem includes i) multiclass classification of 3D data, where the output is the order of symmetry (the number of classes is ~10-20), and ii) multioutput regression of 3D data, where the output is the symmetry axis (a 3-vector). The input data are typically 24x24x24 meshes; the total number of these meshes is of the order of a million. Biological motivation: symmetry is an important feature of protein tertiary and quaternary structures that has been associated with protein folding, function, evolution, and stability. Its emergence and ensuing prevalence have been attributed to gene duplications, fusion events, and subsequent evolutionary drift in sequence. Methods to detect these symmetries exist, based either on the structure or on the sequence of the proteins; however, we believe that they can be vastly improved.
  • Data: Synthetic data are obtained by ‘symmetrizing’ folds from top8000 library (http://kinemage.biochem.duke.edu/databases/top8000.php).
  • References: Our previous 3D CNN: [30]. Invariance of CNNs (and references therein): 01630265/document, [31]
  • Basic algorithm: A prototype has already been created using the Tensorflow framework [4], which is capable of detecting the order of cyclic structures with about 93% accuracy. The main goal of this internship is to optimize the topology of the current neural network prototype and make it rotational and translational invariant with respect to input data. [4] [32]
  • Solution: The network architecture needs to be modified according to the invariance properties (most importantly, rotational invariance); please see the links below [33], [34]. The code is written using the Tensorflow library, and the current model is trained on a single GPU (Nvidia Quadro 4000) of a desktop machine.
  • Novelty: Applications of convolutional networks to 3D data are still very challenging due to the large amount of data and specific requirements on the network architecture. More specifically, the models need to be rotationally and translationally invariant, which makes classical 2D augmentation tricks only loosely applicable here. Thus, new models need to be developed for 3D data.
  • Authors: Expert Sergei Grudinin, consultants Guillaume Pages
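The input format above can be made concrete with a small sketch. The kernel count, layer width, and the 12-class output below are illustrative assumptions, not the prototype's actual topology:

```python
import numpy as np

# Illustrative sketch (an assumption, not the authors' prototype): one valid
# 3D convolution over a 24x24x24 density map -- the input format described
# above -- followed by global average pooling and a softmax over a
# hypothetical 12 symmetry-order classes.
rng = np.random.default_rng(0)

def conv3d(volume, kernels):
    """Valid 3D convolution: volume (D,H,W), kernels (C,k,k,k) -> (C,D',H',W')."""
    k = kernels.shape[1]
    D, H, W = volume.shape
    out = np.empty((kernels.shape[0], D - k + 1, H - k + 1, W - k + 1))
    for c, ker in enumerate(kernels):
        for i in range(D - k + 1):
            for j in range(H - k + 1):
                for m in range(W - k + 1):
                    out[c, i, j, m] = np.sum(volume[i:i+k, j:j+k, m:m+k] * ker)
    return out

density = rng.random((24, 24, 24))                                  # one input mesh
feats = np.maximum(conv3d(density, rng.standard_normal((4, 3, 3, 3))), 0)  # ReLU
pooled = feats.mean(axis=(1, 2, 3))                                 # global average pooling
logits = pooled @ rng.standard_normal((4, 12))                      # 12 classes (assumed)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

A real model would stack several such layers in Tensorflow and add the rotational and translational invariance discussed above.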

Task 46

  • Name: Task of searching for symbols in texts
  • Task: In the simplest case, this task reduces to sequence labeling on a labeled sample. The difficulty lies in obtaining a sufficient amount of training data: a larger sample must be derived from the existing small expert markup (automatically, by searching for patterns, or by compiling a simple and high-quality markup instruction, for example, in Toloka). Once markup is available, one can start experimenting with the selection of the optimal model; various neural network architectures (BiLSTM, Transformer, etc.) may be of interest here.
  • Data: Dictionary of symbols, Marked artistic texts
  • References: http://www.machinelearning.ru/wiki/images/0/05/Mmta18-rnn.pdf
  • Basic algorithm: HMM, RNN
  • Solution: It is proposed to compare the work of several state-of-the-art algorithms. Propose a classifier quality metric for characters (character/non-character). Determine applicability of methods.
  • Novelty: The proposed approach to text analysis is currently applied by experts manually and has not been automated
  • Authors: M. Apishev (consultant), D. Lemtyuzhnikova
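As a minimal illustration of the HMM baseline named above, the sketch below Viterbi-decodes a two-state symbol/non-symbol labeling; all probabilities and the three-word vocabulary are made-up assumptions:

```python
import numpy as np

# Hedged sketch of the HMM baseline: Viterbi decoding over two states,
# O (non-symbol) and SYM (symbol). Transition and emission probabilities
# are invented for illustration only.
states = ["O", "SYM"]
start = np.log([0.8, 0.2])
trans = np.log([[0.7, 0.3],      # P(next | O)
                [0.6, 0.4]])     # P(next | SYM)
# Emissions over a toy 3-word vocabulary: 0="the", 1="rose", 2="walked"
emit = np.log([[0.5, 0.1, 0.4],  # O
               [0.1, 0.8, 0.1]]) # SYM ("rose" is often symbolic)

def viterbi(obs):
    V = start + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = V[:, None] + trans + emit[:, o][None, :]
        back.append(scores.argmax(axis=0))
        V = scores.max(axis=0)
    path = [int(V.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 2]))  # ['O', 'SYM', 'O']
```

A BiLSTM or Transformer tagger would replace the hand-set tables with learned scores but keep the same tag-sequence output.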

Task 47

  • Name: Deep learning for RNA secondary structure prediction
  • Task: RNA secondary structure is an important feature which defines RNA functional properties. Its importance can be illustrated by the fact that it is evolutionarily conserved, and some types of functional RNAs always have the same secondary structure; for example, all tRNAs fold into a cloverleaf. As secondary structure often defines function, knowing it may help investigate the functions of novel RNA molecules. RNA folding is not as easy as DNA folding, because RNA is a single-stranded molecule which forms complicated base-pairing interactions, while DNA mostly exists as fully base-paired double helices. Current methods of RNA structure prediction rely on experimentally evaluated thermodynamic rules, but with thermodynamics alone only 80% of structures can be accurately predicted. We propose an AI-driven method for predicting RNA secondary structure inspired by neural machine translation models.
  • Data: RNA sequences in form of strings of characters
  • References: https://arxiv.org/abs/1609.08144
  • Basic algorithm: https://www.ncbi.nlm.nih.gov/pubmed/16873527
  • Solution: Deep learning recurrent encoder-decoder model with attention
  • Novelty: RNA secondary structure prediction currently remains an unsolved problem, and to the best of our knowledge a deep learning approach has never been reported in the literature before
  • Authors: consultant Maria Popova Chapel-Hill
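For contrast with the proposed encoder-decoder model, the classical Nussinov dynamic program below shows the task's input/output format (an RNA string in, a dot-bracket structure out) by maximizing the number of complementary base pairs; it is a textbook baseline sketched here for illustration, not the thermodynamic method cited above:

```python
# Nussinov dynamic program: maximize complementary base pairs and return the
# structure in dot-bracket notation. min_loop forbids sterically impossible
# short hairpins.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in PAIRS:        # pair i with k
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    struct = ["."] * n
    def trace(i, j):                                 # recover one optimal pairing
        if j - i <= min_loop:
            return
        if dp[i][j] == dp[i + 1][j]:
            trace(i + 1, j); return
        for k in range(i + min_loop + 1, j + 1):
            if (seq[i], seq[k]) in PAIRS:
                right = dp[k + 1][j] if k + 1 <= j else 0
                if dp[i][j] == 1 + dp[i + 1][k - 1] + right:
                    struct[i], struct[k] = "(", ")"
                    trace(i + 1, k - 1); trace(k + 1, j)
                    return
    trace(0, n - 1)
    return "".join(struct)

print(nussinov("GGGAAAUCCC"))  # a 3-pair hairpin
```

The proposed DL model would map the same sequence strings to the same dot-bracket strings, but with learned rather than combinatorial scoring.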

Task 4

  • Name: Automatic setting of ARTM parameters for a wide class of tasks.
  • Task: The bigARTM open library allows building topic models with a wide class of possible regularizers. However, this flexibility makes tuning the coefficients very difficult. The tuning can be greatly simplified by using the relative regularization coefficients mechanism and automatic selection of N-grams. We need to test the hypothesis that there is a universal set of relative regularization coefficients that gives "reasonably good" results on a wide class of problems. Several datasets are given, each with some external quality criterion (for example, classification of documents into categories, or ranking). We find the best parameters for a particular dataset, giving the "locally best model". We then look for a bigARTM initialization algorithm that produces topic models of quality comparable to the "locally best model" on its dataset. Comparability criterion: on a given dataset, the quality of the "universal model" is at most 5% worse than that of the "locally best model".
  • Data: Victorian Era Authorship Attribution Data Set, 20 Newsgroups (uci.edu/ml/datasets/Twenty+Newsgroups), ICD-10, search/ranking triplets.
  • References:
    1. WRC by Nikita Doykov: http://www.machinelearning.ru/wiki/images/9/9f/2015_417_DoykovNV.pdf
    2. Presentation by Viktor Bulatov at a scientific seminar: https://drive.google.com/file/d/19pJ21LRPeeOxY4mkcSnQCRm93zOO4J5b/view
    3. Draft with formulas: https://drive.google.com/open?id=1AqS7snUsSJ18ZYBtC-6uP_2dMTDJSGeD
  • Basic algorithm: PLSA / LDA / logregression.
  • Solution: bigARTM with background themes and smoothing, sparseness and decorrelation regularizers (coefficients picked up automatically), as well as automatically selected N-grams.
  • Novelty: The need for automated tuning of model parameters and the lack of such implementations in the scientific community.
  • Authors: consultant Viktor Bulatov, Expert Vorontsov K. V.
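A minimal sketch of the PLSA baseline named above (plain EM, no regularizers) on a toy term-document count matrix; bigARTM's regularizers would modify the M-step normalizations shown here:

```python
import numpy as np

# PLSA via EM on synthetic counts: phi = p(word | topic), theta = p(topic | doc).
# This is the unregularized baseline; ARTM adds regularizer gradients to the
# expected counts n_wt and n_td before normalization.
rng = np.random.default_rng(0)
n_dw = rng.integers(0, 5, size=(6, 20)).astype(float)   # 6 docs x 20 words
D, W, T = n_dw.shape[0], n_dw.shape[1], 3

phi = rng.random((W, T)); phi /= phi.sum(axis=0)
theta = rng.random((T, D)); theta /= theta.sum(axis=0)

for _ in range(50):
    p_wd = phi @ theta + 1e-12                 # model p(w|d), shape W x D
    ratio = n_dw.T / p_wd                      # n_dw / p(w|d)
    n_wt = phi * (ratio @ theta.T)             # expected word-topic counts
    n_td = theta * (phi.T @ ratio)             # expected topic-doc counts
    phi = n_wt / (n_wt.sum(axis=0) + 1e-12)    # M-step normalizations
    theta = n_td / (n_td.sum(axis=0) + 1e-12)

loglik = np.sum(n_dw.T * np.log(phi @ theta + 1e-12))
print(round(float(loglik), 2))
```

The "universal set of coefficients" hypothesis would then be tested by sweeping relative regularization coefficients around this unregularized solution.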

Task 50

  • Name: Thematic search for similar cases in the collection of acts of arbitration courts.
  • Task: Build an information retrieval algorithm for a collection of acts of arbitration courts. The query can be an arbitrary document of the collection (the text of an act). The search result should be a list of documents in the collection, ranked in descending order of relevance.
  • Data: collection of text documents — acts of arbitration courts http://kad.arbitr.ru.
  • References:
    1. Anastasia Yanina. Thematic exploratory information search. 2018. FIVT MIPT.
    2. Ianina A., Golitsyn L., Vorontsov K. Multi-objective topic modeling for exploratory search in tech news. AINL-2017. CCIS, Springer, 2018.
    3. Ahmed El-Kishky, Yanglei Song, Chi Wang, Clare Voss, Jiawei Han. Scalable Topical Phrase Mining from Text Corpora. 2015.
  • Basic algorithm: BigARTM with decorrelation, smoothing, sparse regularizers. Search by TF-IDF of words, by TF-IDF of UPA links, by thematic vector representations of documents, using a cosine proximity measure. TopMine algorithm for collocation detection.
  • Solution: Add modality of links to legal acts. Add modality of legal terms. Choose the optimal number of topics and regularization strategy. Organize the process of marking pairs of documents. Implement the evaluation of the quality of the search for a labeled sample of pairs of documents.
  • Novelty: The first attempt to use ARTM for thematic search of legal texts.
  • Authors: consultant Ekaterina Artyomova, Expert Vorontsov K. V.
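The TF-IDF baseline from the list above can be sketched in a few lines; the three toy "acts" and the whitespace tokenization are assumptions for illustration:

```python
import math
from collections import Counter

# Rank documents of a toy collection against a query document by cosine
# similarity of TF-IDF vectors -- the word-level baseline named in the task.
docs = [
    "the court dismissed the bankruptcy claim",
    "the arbitration court granted the bankruptcy claim",
    "the contract dispute was settled out of court",
]

def tfidf(texts):
    tokenized = [t.split() for t in texts]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(texts)
    return [{w: c * math.log(n / df[w]) for w, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = tfidf(docs)
query = vecs[0]   # any document of the collection can serve as the query
ranking = sorted(range(len(docs)), key=lambda i: -cosine(query, vecs[i]))
print(ranking)    # the query document itself first, then its neighbours
```

The proposed solution replaces word TF-IDF with modalities (UPA-link TF-IDF, legal terms) and thematic vectors, keeping the same cosine ranking scheme.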

Group 2

Author Topic Links Consultant Reviewer
Vishnyakova Nina Optimal Approximation of Non-linear Power Flow Problem LinkReview paper code presentation video Yury Maximov reviewer Loginov Roman

review

Kudryavtseva Polina Intention forecasting. Building an optimal signal decoding model for modeling a brain-computer interface. code

LinkReview paper video presentation

Roman Isachenko Nechepurenko Ivan

review

Loginov Roman Multimodelling as a universal way to describe a general sample code

LinkReview paper ChatInvite presentation video

Aduenko A. A. Mikhail Makarov review
Mikhail Makarov Location determination by accelerometer signals code

LinkReview paper presentation video

Anastasia Motrenko Cherepkov Anton: review
Kozinov Alexey Task of finding symbols in images LinkReview

paper code

M. Apishev,

D. Lemtyuzhnikova

Gracheva Anastasia review
Buchnev Valentin Early prediction of sufficient sample size for a generalized linear model. LinkReview

paper code presentation video

Grabovoi A.V. reviewer
Nechepurenko Ivan Multimodelling, privileged learning code,

paper, LinkReview presentation

R. G. Neichev Kudryavtseva Polina
Gracheva Anastasia Estimation of binding energy of protein and small molecules code

paper LinkReview presentation video

Sergei Grudinin,

Maria Kadukova

reviewer
Cherepkov Anton Privileged learning in the problem of iris boundary approximation paper, slides, code, LinkReview

video

R. G. Neichev Lepekhin Mikhail

preliminary review

Lepekhin Mikhail Creation of ranking models for information retrieval systems. Algorithm for Predicting the Structure of Locally Optimal Models code

LinkReview paper presentation video

Andrey Kulunchakov Vishnyakova Nina, review
Gridasov Ilya Automatic construction of a neural network of optimal complexity LinkReview

paper Presentation code

O. Yu. Bakhteev, Strizhov V.V. Buchnev Valentin
Telenkov Dmitry Brain signal decoding and intention prediction LinkReview

git The paper Presentation code

Andrey Zadayanchuk reviewer

Task 18

  • Name: Forecasting intentions. Building an optimal signal decoding model for modeling a brain-computer interface.
  • Task: A brain-computer interface (BCI) helps people with disabilities regain their mobility. Given a description of the device signal, it is necessary to model the behavior of the subject.
  • Data: Data sets of ECoG/EEG brain signals.
  • References:
    • Motrenko A.P., Strijov V.V. Multi-way feature selection for ECoG-based brain-computer Interface // Expert systems with applications. - 2018.
  • Basic algorithm: It is proposed to compare with the partial least squares algorithm.
  • Solution: In this work, it is proposed to build a single system that solves the problem of signal decoding. As stages of building such a system, it is proposed to solve the problems of data preprocessing, feature space extraction, dimensionality reduction and selection of a model of optimal complexity. It is proposed to use the tensor version of PLS with feature selection.
  • Novelty: In the formulation of the problem, the complex nature of the signal is taken into account: a continuous trajectory of movement, the presence of discrete structural variables (fingers or joint movement), the presence of continuous variables (position of a finger or limb).
  • Authors: Strizhov V.V., Tetiana Aksenova, consultant – Roman Isachenko
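The PLS baseline mentioned above can be sketched as a few-component NIPALS PLS1 on synthetic data (X standing in for signal features, y for a movement coordinate); the tensor variant with feature selection proposed in the task is more involved:

```python
import numpy as np

# PLS1 (NIPALS with deflation) on synthetic data only -- an illustration of
# the comparison baseline, not the ECoG pipeline itself.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))
w_true = rng.standard_normal(30)
y = X @ w_true + 0.1 * rng.standard_normal(200)

def pls1(X, y, n_components=5):
    Xk, yk = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)                   # weight: max covariance with y
        t = Xk @ w                               # score
        p = Xk.T @ t / (t @ t)                   # X loading
        c = yk @ t / (t @ t)                     # y loading
        Xk = Xk - np.outer(t, p)                 # deflation
        yk = yk - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    beta = W @ np.linalg.solve(P.T @ W, q)       # regression coefficients
    return beta, X.mean(0), y.mean()

beta, x_mean, y_mean = pls1(X, y)
pred = (X - x_mean) @ beta + y_mean
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))
```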

Task 41

  • Name: Optimal Approximation of Non-linear Power Flow Problem
  • Task: Our goal is to approximate the solution of the non-linear, non-convex optimal power flow problem by solving a sequence of convex optimization problems (the trust-region approach). Along this way, we propose to compare various approaches to an approximate solution of this problem, adaptively approximating the power flow non-linearities with a sequence of quadratic and/or piece-wise linear functions.
  • Data: Matpower module from MATLAB contains all necessary test cases. Start considering IEEE 57 bus case.
  • References:
    1. Molzahn, D. K., & Hiskens, I. A. (2019). A survey of relaxations and approximations of the power flow equations. Foundations and Trends in Electric Energy Systems, 4(1-2), 1-221. https://www.nowpublishers.com/article/DownloadSummary/EES-012
    2. The QC Relaxation: A Theoretical and Computational Study on Optimal Power Flow. Carleton Coffrin ; Hassan L. Hijazi; Pascal Van Hentenryck https://ieeexplore.ieee.org/abstract/document/7271127/
    3. Convex Relaxations in Power System Optimization: A Brief Introduction. Carleton Coffrin and Line Roald. https://arxiv.org/pdf/1807.07227.pdf
    4. Optimal Adaptive Linearizations of the AC Power Flow Equations. Sidhant Misra, Daniel K. Molzahn, and Krishnamurthy Dvijotham https://molzahn.github.io/pubs/misra_molzahn_dvijotham-adaptive_linearizations2018.pdf
  • Basic algorithm: A set of algorithms described in [1] should be considered to compare with, details behind the proposed method would be shared by the consultant (a draft of the paper)
  • Solution: To assess the quality of the solution, we propose to compare it with the solutions given by IPOPT and by numerous relaxations, and to do some reverse engineering of our method
  • Novelty: The OPF is a truly hot topic in power systems and is of high interest to the discrete optimization community (as a general QCQP problem). Any advance in this area is of great interest to the community
  • Authors: Yury Maximov (consultant and expert), Michael Chertkov (expert)
  • Notes: the problem has both the computational and the theoretical focuses, so 2 students are ok to work on this topic
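A toy illustration of the sequential-linearization / trust-region idea (an assumption for intuition only, not the proposed OPF method): minimize a linear cost over a single nonlinear "power-flow-like" constraint by repeatedly linearizing it and taking a bounded step with a closed-form solution:

```python
import numpy as np

# Minimize c^T x subject to the nonlinear constraint ||x||^2 = 4 by
# linearizing the constraint at the current iterate and taking the best step
# d with a.d = b inside a small trust region of radius r (closed form).
c = np.array([1.0, 1.0])
x = np.array([2.0, 0.0])             # feasible starting point
r = 0.1                              # trust-region radius
for _ in range(300):
    a = 2 * x                        # gradient of g(x) = ||x||^2
    b = 4 - x @ x                    # keep the linearized constraint at g = 4
    d_min = (b / (a @ a)) * a        # least-norm step satisfying a.d = b
    c_perp = c - (a @ c) / (a @ a) * a          # cost direction tangent to g
    rho = np.sqrt(max(r**2 - d_min @ d_min, 0.0))
    if np.linalg.norm(c_perp) > 1e-12:
        d = d_min - rho * c_perp / np.linalg.norm(c_perp)
    else:
        d = d_min
    x = x + d
print(x, c @ x)  # approaches the minimizer (-sqrt(2), -sqrt(2))
```

The actual OPF methods in [1]-[4] replace this toy constraint with the AC power flow equations and the closed-form step with an LP/QP solve.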

Task 2

  • Name: Investigation of reference objects in the problem of metric classification of time series.
  • Task: The DTW function is a distance between two time series that may be non-linearly warped relative to each other. It looks for the best alignment between two objects, so it can be used in metric classification. One approach to metric classification is to measure distances to reference objects and use the vector of these distances as a feature description of the object. The DBA method is an algorithm for constructing centroids (reference objects) for time series based on the DTW distance. When computing the distance between a time series and a centroid, some pairs of values (e.g., peak values) are more specific to one of the classes, and the impact of such coincidences on the distance value should be higher.

It is necessary to explore various ways of constructing reference objects, as well as determining their optimal number. The criterion is the quality of the metric classifier in the task. In the DBA method, for each centroid, it is proposed to create a weight vector that demonstrates the "significance" of the measurements of the centroid, and use it in the modified weighted-DTW distance function.

The proposed solution: a survey of the literature and a combination of up-to-date methods.

  • Novelty: There has not been a comprehensive study of various methods of constructing centroids and reference elements along with the choice of their optimal number.
  • Authors: Alexey Goncharov - consultant and Expert, Strizhov V.V. - Expert
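The DTW distance at the core of the task fits in a few lines; the weighted variant proposed above would multiply the pointwise cost by a learned per-measurement weight of the centroid:

```python
import numpy as np

# Classic dynamic-programming DTW between two 1-D series; the weighted-DTW
# modification would scale `cost` by a centroid weight w[j - 1].
def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

s = [0.0, 1.0, 2.0, 1.0, 0.0]
shifted = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, warped in time
other = [3.0, 3.0, 3.0, 3.0, 3.0]
print(dtw(s, s), dtw(s, shifted), dtw(s, other))
```

Note that the warped copy has zero DTW distance while any Euclidean-style comparison would penalize the time shift; this is exactly why DTW suits the metric classification above.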

Task 7

  • Name: Privileged learning in the iris boundary approximation problem
  • Task: Based on the image of the human eye, determine the circles approximating the inner and outer border of the iris.
  • Data: Bitmap monochrome images, typical size 640*480 pixels (however other sizes are possible)[35], [36].
  • References:
    • Aduenko A.A. Selection of multimodels in classification tasks (supervisor Strizhov V.V.). Moscow Institute of Physics and Technology, 2017. [37]
    • K.A. Gankin, A.N. Gneushev, I.A. Matveev Segmentation of the iris image based on approximate methods with subsequent refinements // Izvestiya RAN. Theory and control systems, 2014, no. 2, p. 78–92.
    • Duda, R. O., Hart, P. E. Use of the Hough transformation to detect lines and curves in pictures // Communications of the ACM. 1972. Vol. 15, no. 1.
  • Basic algorithm: Efimov Yury. Search for the outer and inner boundaries of the iris in the eye image using the paired gradient method, 2015.
  • Solution: See iris_circle_problem.pdf
  • Novelty: A fast non-enumerative algorithm for approximating boundaries using linear multimodels is proposed. Additionally, capsule neural networks.
  • Authors: consultant Radoslav Neichev (with Strizhov V.V.), Expert Matveev I.A.
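As a hint of what a fast non-enumerative boundary fit can look like, the sketch below uses the linear least-squares (Kasa) circle fit on synthetic noisy edge points; this is an illustrative assumption, not the multimodel method of the task:

```python
import numpy as np

# Kasa circle fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (R^2 - cx^2 - cy^2) is
# linear in (cx, cy, c0), so a single least-squares solve recovers the
# circle -- no Hough-style enumeration needed. Edge points are synthetic.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 100)
cx, cy, R = 320.0, 240.0, 80.0                        # "iris" centre and radius
x = cx + R * np.cos(t) + rng.normal(0, 1.0, t.size)   # noisy boundary points
y = cy + R * np.sin(t) + rng.normal(0, 1.0, t.size)

A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2
(cx_hat, cy_hat, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
R_hat = np.sqrt(c0 + cx_hat**2 + cy_hat**2)
print(round(cx_hat, 1), round(cy_hat, 1), round(R_hat, 1))
```

On real eye images the edge points would come from gradient-based boundary detection, and a multimodel would fit the inner and outer circles jointly.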

Task 44

  • Name: Early prediction of sufficient sample size for a generalized linear model.
  • Task: The problem of experiment design is investigated: estimating a sufficient sample size from the data. The sample is assumed to be simple and described by an adequate model; in other words, the sample is generated by a fixed probabilistic model from a known class of models. The sample size is considered sufficient if the model is reconstructed with sufficient confidence. It is required, knowing the model, to estimate a sufficient sample size at the early stages of data collection.
  • Data: For the computational experiment, it is proposed to use classical samples from the UCI repository. Link to samples https://github.com/ttgadaev/SampleSize/tree/master/datasets
  • References:
    1. [Overview of methods for estimating sample size]
    2. http://svn.code.sf.net/p/mlalgorithms/code/PhDThesis/.
    3. Bootstrap method. https://projecteuclid.org/download/pdf_1/euclid.aos/1.

Bishop, C. 2006. Pattern Recognition and Machine Learning. Berlin: Springer.

  • Basic algorithm: We say that the sample size m is sufficient if the log-likelihood, computed on a sample of size m using bootstrap, has a small variance.

We approximate the dependence of the mean value of the log-likelihood and of its variance on the sample size.

  • Solution: The methods described in the review are asymptotic or require a deliberately large sample size. The new method should predict the sufficient sample size at the early stages of experiment design, i.e., when data are scarce.
  • Authors: Grabovoi A.V. (consultant), Gadaev T. T. Strizhov V.V. (Experts)
  • Note: to determine the simplicity of the sample, a new definition of complexity is proposed (Sergey Ivanychev). This is a separate work, +1 Task 44a (? Katruza).
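The bootstrap criterion from the "Basic algorithm" item above can be sketched as follows; the Gaussian model and the particular sizes are illustrative assumptions:

```python
import numpy as np

# For growing sample sizes m, estimate the variance of the mean
# log-likelihood of a fitted Gaussian over bootstrap resamples; m is called
# sufficient once this variance falls below a chosen threshold.
rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, size=1000)

def loglik_variance(sample, n_boot=200):
    stats = []
    for _ in range(n_boot):
        bs = rng.choice(sample, size=len(sample), replace=True)
        mu, sigma = bs.mean(), bs.std() + 1e-12
        ll = -0.5 * np.log(2 * np.pi * sigma**2) - (bs - mu)**2 / (2 * sigma**2)
        stats.append(ll.mean())
    return float(np.var(stats))

for m in (10, 100, 1000):
    print(m, round(loglik_variance(data[:m]), 5))  # variance shrinks with m
```

Early prediction would then extrapolate this variance curve from the first few small values of m instead of collecting all 1000 observations.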

Task 15

  • Name: Formulation and solution of an optimization problem combining classification and regression to estimate the binding energy of a protein and small molecules. Task description [38]
  • Task: From a bioinformatics point of view, the task is to estimate the free energy of protein binding to a small molecule (ligand): the best ligand in its best position has the lowest free energy of interaction with the protein. (A longer description is given in the file linked above.)
  • Data:
    • Data for binary classification. Approximately 12,000 protein-ligand complexes: for each of them there is 1 native position and 18 non-native ones. The main descriptors are histograms of distributions of distances between different atoms of the protein and ligand, the dimension of the vector of descriptors is ~ 20,000. In the case of continued research and publication in a specialized journal, the set of descriptors can be expanded. The data will be provided as binary files with a python script to read.
    • Data for regression. For each of the presented complexes, the value of the quantity is known, which can be interpreted as the binding energy.
  • References:
  • Basic algorithm: [42] In the classification problem, we used an algorithm similar to linear SVM; its relationship with the energy estimate, which is beyond the scope of the classification problem, is described in the article above. Various loss functions can be used in the regression problem.
  • Solution: It is necessary to couple the previously used optimization problem with the regression problem and solve it using standard methods. Cross-validation will be used to check the performance of the algorithm. There is a separate test set consisting of (1) 195 protein-ligand complexes, for which the best ligand pose must be found (the algorithm for obtaining ligand poses differs from that used in training), (2) protein-ligand complexes, for whose native poses the binding energy must be predicted, and (3) 65 proteins, for which the most strongly binding ligand is to be found.
  • Novelty: First of all, the interest is combining classification and regression problems. The correct assessment of the quality of protein and ligand binding is used in drug development to search for molecules that interact most strongly with the protein under study. Using the classification problem described above to predict the binding energy results in an insufficiently high correlation of predictions with experimental values, while using the regression problem alone leads to overfitting.
  • Authors Sergei Grudinin, Maria Kadukova
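A hypothetical illustration of coupling the two objectives: a linear model trained on a weighted sum of a hinge loss (native vs. non-native poses) and a squared-error loss (energy regression). The data, the score convention (low energy = native), and the weight lam are synthetic assumptions, not the task's actual descriptors:

```python
import numpy as np

# Joint objective: mean hinge loss on labels + lam * mean squared error on
# energies, minimized by plain gradient descent over a shared weight vector.
rng = np.random.default_rng(0)
n, d = 400, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
energy = X @ w_true + 0.1 * rng.standard_normal(n)   # "binding energy"
label = np.where(energy < 0, 1.0, -1.0)              # low energy = native pose

w, lam, lr = np.zeros(d), 0.5, 0.01
for _ in range(500):
    margin = label * (-(X @ w))                       # classification score: -energy
    hinge_grad = ((label * (margin < 1))[:, None] * X).mean(0)
    reg_grad = (2 * (X @ w - energy)[:, None] * X).mean(0)
    w -= lr * (hinge_grad + lam * reg_grad)

pred = X @ w
print(round(float(np.corrcoef(pred, energy)[0, 1]), 3))
```

The point of the combined loss is visible here: the same weights must both rank poses correctly and track the energy values, which is what the task asks of the real descriptors.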

Task 27

  • Name: Creation of ranking models for information retrieval systems. Algorithm for Predicting the Structure of Locally Optimal Models
  • Task: It is required to predict a time series using some parametric superposition of algebraic functions. It is proposed not to construct the forecasting model but to predict it, that is, to predict the structure of the approximating superposition. A class of admissible superpositions is introduced, and on the set of such structural descriptions a search is made for the locally optimal model for the problem under consideration. The task consists in 1) finding a suitable structural description of the model, 2) describing the search algorithm for the structure that corresponds to the optimal model, and 3) describing the algorithm that reconstructs the model from its structural description. For an existing example of answers to questions 1-3, see the works of A. A. Varfolomeeva.
  • Data:
    • Collection of text documents TREC (!)
    • A set of time series, which implies the restoration of functional dependencies. It is proposed to first use synthetic data or immediately apply the algorithm to forecasting time series 1) electricity consumption 2) physical activity with subsequent analysis of the resulting structures.
  • References:
    1. (!) Kulunchakov A.S., Strijov V.V. Generation of simple structured Information Retrieval functions by genetic algorithm without stagnation // Expert Systems with Applications, 2017, 85: 221–230.
    2. A. A. Varfolomeeva Selection of features when marking up bibliographic lists using structural learning methods, 2013, [43]
    3. Bin Cao, Ying Li and Jianwei Yin Measuring Similarity between Graphs Based on the Levenshtein Distance, 2012, [44]
  • Basic algorithm: Described in [1]. Developed in the work of the 974 group team. It is proposed to use their code and experiment.
  • Solution: It is proposed to repeat the experiment of A. A. Varfolomeeva for a different structural description in order to understand what is happening. A superposition of algebraic functions defines an oriented tree whose vertices are labeled with the corresponding algebraic functions or variables. Therefore, the structural description of such a superposition can be its DFS-code: a string of vertex labels written in the order of a depth-first traversal of the tree. Knowing the arities of the corresponding algebraic functions, we can reconstruct the superposition from any such DFS-code in O(n). On the set of such string descriptions, it is proposed to search for the string that corresponds to the optimal model.
  • Authors: consultant Andrey Kulunchakov (Inria Montbonnot), Expert Strizhov V.V.
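The DFS-code description from the Solution item can be made concrete; the function set and arities below are illustrative assumptions:

```python
# A superposition of algebraic functions as a labeled tree, serialized to a
# DFS-code (pre-order label sequence) and rebuilt in O(n) from known arities.
ARITY = {"add": 2, "mul": 2, "sin": 1, "exp": 1, "x": 0, "t": 0}

def encode(tree):
    """tree = (label, [children]) -> list of labels in pre-order."""
    label, children = tree
    code = [label]
    for ch in children:
        code += encode(ch)
    return code

def decode(code):
    """Rebuild the tree from a DFS-code, consuming labels left to right."""
    it = iter(code)
    def build():
        label = next(it)
        return (label, [build() for _ in range(ARITY[label])])
    return build()

# sin(x) + x * t
model = ("add", [("sin", [("x", [])]), ("mul", [("x", []), ("t", [])])])
code = encode(model)
assert decode(code) == model      # lossless round trip
print(code)                       # ['add', 'sin', 'x', 'mul', 'x', 't']
```

The structure search then operates on such label strings, e.g. with a Levenshtein-type distance between codes as in reference [3].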

Task 26

  • Name: Accelerometer positioning
  • Task: Given initial coordinates, accelerometer signals, and additional information (gyroscope and magnetometer signals), determine the location. A possibly inaccurate map may also be given (the SLAM setting).
  • Data: from [1], self-collected data.
  • References:
    1. https://arxiv.org/pdf/1712.09004.pdf
    2. https://ieeexplore.ieee.org/document/1528431
  • Basic algorithm: from [1].
  • Solution: Search for a priori and additional information that improves positioning accuracy.
  • Novelty: Statement of the problem in terms of Projection to Latent Spaces
  • Authors: consultant Anastasia Motrenko, Expert Ilya Gartseev , Strizhov V.V.
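The naive baseline that the a-priori information is meant to improve is plain dead reckoning: double integration of acceleration from the known initial coordinates. The synthetic 2-D signal below is an assumption for illustration:

```python
import numpy as np

# Dead reckoning: integrate acceleration to velocity, velocity to position.
# Sensor noise and bias, absent here, make this drift rapidly in practice --
# hence the need for maps and extra sensors in the task.
dt = 0.01
t = np.arange(0, 2, dt)
acc = np.column_stack([np.cos(t), np.zeros_like(t)])   # synthetic acceleration

pos = np.zeros(2)        # given initial coordinates
vel = np.zeros(2)
for a in acc:
    vel = vel + a * dt   # acceleration -> velocity
    pos = pos + vel * dt # velocity -> position
print(pos)               # close to the closed form x(t) = 1 - cos(t) at t = 2
```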

Task 45

  • Name: Task of searching for symbols in images
  • Task: In one formulation, this task reduces to two sequential operations: 1) finding objects in the image and determining their class, and 2) searching a database for information about the symbolic meaning of the found objects. The main difficulty lies in finding objects in the image. However, the subsequent classification may also be difficult because the depiction of an object may be incomplete, unusually stylized, and so on.
  • Data: Dictionary of Symbols Museum Sites Image-net
  • References:
    1. http://www.machinelearning.ru/wiki/images/e/e2/IDP18.pdf (p. 116)
    2. http://www.image-net.org
  • Basic algorithm: CNN
  • Solution: It is proposed to compare the work of several state-of-the-art algorithms. Suggest a quality metric for searching and classifying objects. Determine applicability of methods.
  • Novelty: The proposed approach to image analysis is currently applied by experts manually and has not been automated
  • Authors: M. Apishev (consultant), D. Lemtyuzhnikova

Task 28

  • Name: Multimodelling as a universal way to describe a general sample
  • Task: Build a method for incremental refinement of the multimodel structure when new objects appear. Development and comparison of different algorithms for updating the structure of multimodels. Construction of an optimal scheme for refining the structure of a multimodel depending on the total sample size.
  • Data: At the initial stage of work, synthetic data with a known statistical structure is used. Testing of the developed methods is carried out on real data from the UCI repository.
  • References:
  1. Bishop, Christopher M. "Pattern recognition and machine learning." Springer, New York (2006).
  2. Gelman, Andrew, et al. Bayesian data analysis, 3rd edition. Chapman and Hall/CRC, 2013.
  3. MacKay, David JC. "The evidence framework applied to classification networks." Neural computation 4.5 (1992): 720-736.
  4. Aduenko A. A. "Selection of multimodels in classification tasks." Ph.D. thesis
  5. Motrenko, Anastasiya, Strizhov V.V., and Gerhard-Wilhelm Weber. "Sample size determination for logistic regression." Journal of Computational and Applied Mathematics 255 (2014): 743-752.
  • Basic algorithm: Algorithm for constructing adequate multi-models from #4.
  • Solution: Bayesian approach to the problem of choosing models based on validity. Analysis of the properties of validity and its relationship with statistical significance.
  • Novelty: A method is proposed for constructing an optimal scheme for updating the structure of a multimodel when new objects appear. The relationship between validity and statistical significance for some classes of models has been studied.
  • Authors: Strizhov Vadim Viktorovich, Aduenko Alexander Alexandrovich (GMT-5)
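A hedged sketch of the model-comparison idea above: scoring polynomial models of growing complexity with BIC, a crude stand-in for the Bayesian evidence, on synthetic data, and refining the choice as new objects arrive:

```python
import numpy as np

# BIC = n*log(RSS/n) + k*log(n): residual fit plus a complexity penalty.
# The evidence-based multimodel update of the task plays a similar role,
# but with full posterior distributions instead of a point penalty.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.1 * rng.standard_normal(200)   # quadratic truth

def bic(x, y, degree):
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    n, k = len(x), degree + 1
    return n * np.log(resid @ resid / n) + k * np.log(n)

for n in (20, 200):        # re-score the candidates as the sample grows
    scores = {d: round(bic(x[:n], y[:n], d), 1) for d in range(1, 6)}
    print(n, scores)
```

The incremental scheme of the task would update such scores object by object rather than refitting from scratch.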

Task 11

Task 48

Task 49

  • Name: Brain signal decoding and intention prediction
  • Task: It is required to build a model that reconstructs limb movement from the corticogram.
  • Data: neurotycho.org [9] (or fingers)
  • References:
    • Neichev R.G., Katrutsa A.M., Strizhov V.V. Selection of the optimal set of features from a multicorrelated set in the forecasting problem. Zavodskaya Lab. Materials Diagnostics, 2016, 82(3) : 68-74. [10]
    • Isachenko R.V., Strijov V.V. Quadratic Programming Optimization with Feature Selection for Non-linear Models // Lobachevskii Journal of Mathematics, 2018, 39(9) : 1179-1187. article
  • Basic algorithm: Partial Least Squares[11]
  • Solution: Create a feature selection algorithm alternative to PLS and taking into account the non-orthogonal feature interdependence structure.
  • Novelty: A feature selection method is proposed that takes into account the regularities of both the independent and the dependent variables. Bonus: explore changes in the model structure as the nature of the sample changes.
  • Authors: Andrey Zadayanchuk, Strizhov V.V.

2018

Autumn 2018

Number Project name materials Team
0 (Example) Metric classification of time series code,

LinkReview, Discussion

Alexey Goncharov*, Maxim Savinov
1 Forecasting the direction of movement of the price of exchange instruments according to the news flow Code,

LinkReview, Slides, Report

Alexander Borisov,

Drobin Maxim, Govorov Ivan, Mukhitdinova Sofia, Valentin Rodionov, Valentin Akhiyarov

2 Construction of reference objects for a set of multidimensional time series Code

LinkReview

Iskhakov Rishat,

Korepanov Georgy, Solodnev Samirkhanov Danil

3 Dynamic alignment of multivariate time series Code

LinkReview Slides Report

Gleb Morgachev,

Vladislav Smirnov, Tatiana Lipnitskaya

4 Automatic adjustment of ARTM parameters for a wide class of tasks Code,

LinkReview, Presentation

Golubeva Tatiana,

Ivanova Ekaterina, Matveeva Svetlana, Trusov Anton, Tsaritsyn Mikhail, Chernonog Vyacheslav

5 Finding paraphrases Code,

LinkReview

Stas Okrug, Nikita Mokrov

Fedor Kitashov, Polina Proskura, Natalia Basimova, Roman Krasnikov, Akhmedkhan Shabanov

6 On conformational changes of proteins using collective motions in torsion angle space and L1 regularization Code,

LinkReview Presentation

Ryabinina Raisa, Emtsev Daniil
7 Privileged training in the problem of approximating the borders of the iris Code,

LinkReview

Pavel Fedosov, Alexey Gladkov,

Genrikh Kenigsberger, Ivan Korostelev, Nikolay Balakin

8 Generation of features using locally approximating models Code,

LinkReview

Ibrahim Kurashov, Nail Gilmutdinov,

Albert Mulyukov, Valentin Spivak

9 Text recognition based on skeletal representation of thick lines and convolutional networks Code, LiteratureReview, Slides, report Kutsevol Polina

Lukoyanov Artem Korobov Nikita Boyko Alexander Litovchenko Leonid Valukov Alexandr Badrutdinov Kamil Yakushevskiy Nikita Valyukov Nikolay Tushin Kirill


10 Comparison of neural network and continuous-morphological methods in the problem of text detection Code, LinkReview, Discussion, Presentation Gaiduchenko Nikolay

Torlak Artyom Akimov Kirill Mironova Lilia Gonchar Daniel

11 Automatic construction of a neural network of optimal complexity Code, LinkReview, report, slides Nikolai Goryan

Alexander Ulitin Tovkes Artem Taranov Sergey Gubanov Sergey Krinitsky Konstantin Zabaznov Anton Valery Markin

12 Machine translation training without parallel texts. Code,

LinkReview, Report, Slides

Alexander Artemenkov

Angelina Yaroshenko Andrey Stroganov Egor Skidnov Anastasia Borisova Ryabov Fedor Mazurov Mikhail

13 Deep learning for RNA secondary structure prediction Code

Link Review

Dorokhin Semyon

Pastukhov Sergey Pikunov Andrey Nesterova Irina Anna chat

14 Deep Learning for reliable detection of tandem repeats in 3D protein structures Code

Link Review

Veselova Evgeniya
15 Formulation and solution of an optimization problem combining classification and regression to estimate the binding energy of a protein and small molecules Code

Link Review

Merkulova Anastasia

Plumite Elvira Zhiboyedova Anastasia chat

16 Estimation of the optimal sample size for research in medicine Code

Link Review

Artemy Kharatyan,

Mikhail Mikheev, Evgin Alexander, Seppar Alexander, Konoplyov Maxim, Murlatov Stanislav, Makarenko Stepan

17 Intention forecasting. Investigation of the properties of local models in the spatial decoding of brain signals Code,

LinkReview, Presentation

Natalia Bolobolova,

Alina Samokhina, Shiyanov Vadim

18 Intention forecasting. Building an optimal signal decoding model for modeling a brain-computer interface. Code,

LinkReview, Presentation, Article

Ivan Nasedkin, Galiya Latypova,

Nestor Sukhodolsky, Alexander Shemenev Ivan Borodulin,

19 Investigation of the dependence of the quality of recognition of ontological objects on the depth of hyponymy. Code,

Report, LinkReview, Presentation

Vyacheslav Rezyapkin, Alexey Russkin,

Victoria Dochkina, Miron Kuznetsov, Yarmoshyk Demyan

20 Comparison of the quality of end-to-end trainable models in the task of answering questions in a dialogue, taking into account the context Code

LinkReview Report, Presentation

Agafonov Alexey, Ryakin Ilya,Litvinenko Vladimir,

Khokhlov Ivan, Velikovsky Nikita, Anufrienko Oleg

21 High order convex optimization methods Code,

LinkReview, Slides

Selikhanovich Daniel,

Sokolov Igor

23 Fractal analysis and synthesis of optical images of sea waves code,

LinkReview, Presentation Report

Kanygin Yuri
24 Entropy maximization for various types of image transformations code,

LinkReview, report, slides

Nikita Voskresensky,

Alisa Shabalina, Yaroslav Murzaev, Alexey Khokhlov, Alexey Kazakov, Olga Gribova, Alexander Belozertsev

25 Automatic detection and recognition of objects in images code,

code_A, Slides_for_demo, Report2018Project25_30 Report2018Project25_31 slides_30 slides_25_31 LinkReview

Julia Demidova

Ivan Razumov Vladislav Tominin Yaroslav Tominin Nikita Dudorov Leonid Erlygin Proshutinsky Dmitry Baimakov Vladimir Zubkov Alexander Chernenkova Elena

26 Location determination by accelerometer signals Code,

LinkReview, Slides, Text

Elvira Zainulina

Fateev Dmitry Vitaly Protasov Nikita Bozhedomov

28 Multimodelling as a universal way to describe a general sample Code,

Linkreview, Slides, Report

Vladimir Kachanov

Evgenia Strelkova

29 Cross-Language Document Extractive Summarization with Neural Sequence Model Code,

Linkreview, Report, Slides

Pavel Zakharov

Pavel Kvasha Evgeny Dyachkov Evgeny Petrov Ilya Selnitsky

31 Pairwise energy matrix construction for inverse folding problem Code,

LinkReview Report Slides

Rubinshtein Alexander
32 Smooth orientation-dependent scoring function Code

Report

Noskova Elizaveta

Kachkov Sergey Sidorenko Anton

Task 5

  • Name: Finding paraphrases.
  • Task: Paraphrases are different variations of one and the same text, identical in meaning but differing lexically and grammatically, for example: "Where did the car go" and "Which direction did the car go". The task of detecting paraphrases is to select clusters in a set of texts such that each cluster contains only paraphrases of one and the same sentence.

The easiest way to extract paraphrases is to cluster texts, where each text is represented by a "bag of words".

  • Data: Open datasets of questions for training and testing are available on kaggle.com; open test sets are available from the SemEval conferences.
  • References:
    1. Will be later
  • Basic algorithm: Use one of the document clustering algorithms to extract paraphrases, where each document is represented by a bag of words or tf-idf.
  • Solution: Use neural network architectures to search for paraphrases, use phrases extracted with parsers as features, use multilevel clustering.
  • Novelty: There are no implementations for Russian that use parsers for this task; all current solutions are quite "simple".
  • Authors: Artyom Popov.
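The baseline above can be sketched as tf-idf bag-of-words vectors with greedy threshold clustering. This is a minimal illustration, not the required solution: the example texts, the whitespace tokenization, and the similarity threshold are all arbitrary assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Represent each text as a sparse tf-idf 'bag of words' dict."""
    docs = [Counter(t.lower().split()) for t in texts]
    n = len(docs)
    df = Counter(w for d in docs for w in d)
    return [{w: tf * math.log(n / df[w]) for w, tf in d.items()} for d in docs]

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster(texts, threshold=0.2):
    """Greedily assign each text to the first cluster whose seed is similar enough."""
    vecs = tfidf_vectors(texts)
    clusters = []  # each cluster is a list of text indices
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

A proper solution would replace this greedy pass with one of the document clustering algorithms mentioned above.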

Task 6

  • Name: On conformational changes of proteins using collective motions in torsion angle space and L1 regularization.
  • Task: Torsion angles are the most natural degrees of freedom for describing motions of polymers such as proteins, because bond lengths and bond angles are heavily constrained by covalent forces. Thus, multiple attempts have been made to describe protein dynamics in torsion angle space. For example, one of us has developed an elastic network model (ENM) [1] in torsion angle space called the Torsional Network Model (TNM) [2]. Functional conformational changes in proteins can be described in Cartesian space using just a subset of collective coordinates [3], or even a sparse representation of these [4]. The latter requires solving a LASSO optimization problem [5]. The goal of the current project is to study whether a sparse subset of collective coordinates in the torsion subspace can describe functional conformational changes in proteins. This requires solving a ridge regression problem with an L1 regularization constraint. The starting point will be the LASSO formulation.
  • Data: Experimental conformations will be extracted from the Protein Docking Benchmark v5 (https://zlab.umassmed.edu/benchmark/) and a few others. The TNM model can be downloaded from https://ub.cbm.uam.es/tnm/tnm_soft_main.php
  • References:
    1. Tirion MM. (1996) Large Amplitude Elastic Motions in Proteins from a Single-Parameter, Atomic Analysis. Phys Rev Lett. 77:1905–1908.
    2. Mendez R, Bastolla U. (2010) Torsional network model: normal modes in torsion angle space better correlate with conformation changes in proteins. Phys Rev Lett. 104:228103.
    3. SwarmDock and the use of normal modes in protein-protein docking. IH Moal, PA Bates - International journal of molecular sciences, 2010
    4. Modeling protein conformational transition pathways using collective motions and the LASSO method. TW Hayes, IH Moal - Journal of chemical theory and computation, 2017
    5. https://en.wikipedia.org/wiki/Lasso_(statistics)
    6. E. Frezza, R. Lavery, Internal normal mode analysis (iNMA) applied to protein conformational flexibility, Journal of Chemical Theory and Computation 11 (2015) 5503–5512.
  • Basic algorithm: The starting point will be a combination of methods from references 2 and 4. It has to be a LASSO formulation with the direction vectors reconstructed from the internal coordinates. The quality will be computed based on the RMSD measure between the prediction and the solution on several benchmarks. Results will be presented with statistical plots (see examples in references 3-4).
  • Novelty: This is an important and open question in computational structural bioinformatics - how to efficiently represent transitions between protein structures. Not much has been done in the torsional angle subspace (internal coordinates)[6] and nearly nothing has been done using L1 regularization [4].
  • Authors: Ugo Bastolla on the torsional subspace (https://ub.cbm.uam.es/home/ugo.php), Sergei Grudinin on L1 minimization (https://team.inria.fr/nano-d/team-members/sergei-grudinin/)
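The LASSO step [5] at the core of the project can be sketched with ISTA (iterative soft-thresholding): given a matrix M whose columns are normal-mode direction vectors and an observed conformational displacement dx, a sparse coefficient vector is recovered. The mode matrix here is a toy stand-in; real modes would come from the TNM software.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(M, dx, lam=0.1, n_iter=500):
    """Minimize 0.5*||M a - dx||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(M, 2) ** 2  # Lipschitz constant of the smooth part
    a = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ a - dx)
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

With orthonormal modes this reduces to soft-thresholding the mode amplitudes, which is why small contributions are zeroed out and a sparse subset of collective coordinates remains.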

Task 10

  • Name: Comparison of neural network and continuous-morphological methods in the text detection problem.
  • Task: Automatically Detect Text in Natural Images.
  • Data: Synthetically generated data + a prepared sample of photos + the COCO-Text dataset + the Avito 2014 contest.
  • References: COCO benchmark, one of the state-of-the-art architectures
  • Basic algorithm: code + morphological methods, Avito 2014 winner’s solution.
  • Solution: It is proposed to compare the performance of several state-of-the-art algorithms that need a large training set with morphological methods that require a small amount of data. It is proposed to determine the limits of applicability of certain methods.
  • Novelty: propose an algorithm based on the use of both neural network and morphological methods (solution of the word detection problem).
  • Authors: I. N. Zharikov.
  • Expert: L. M. Mestetsky (morphological methods).

Task 16

  • Name: Estimate of the optimal sample size for research in medicine
  • Task: In conditions of an insufficient number of expensive measurements, it is required to predict the optimal size of the replenished sample.
  • Data: Samples of measurements in medical diagnostics, in particular, a sample of immunological markers.
  • References:
  • Basic algorithm: A series of empirical sample size estimation algorithms.
  • Solution: Investigation of the properties of the parameter space when replenishing the sample.
  • Novelty: A new methodology for sample size forecasting is proposed, justified in terms of classical and Bayesian statistics.
  • Authors: A.M. Katrutsa, Strizhov V.V., coordinator Tamaz Gadaev
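As a classical point of reference for the basic algorithms, the textbook frequentist sample-size formula for comparing two group means can be sketched as follows; the project itself concerns sample replenishment and Bayesian justification, so this is only the standard starting point, and the sigma/delta values in the usage example are illustrative.

```python
import math
from statistics import NormalDist

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.8):
    """Sample size per group to detect a difference `delta` between two group
    means with common standard deviation `sigma`, using a two-sided z-test at
    significance level `alpha` and the given statistical power."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)
```

For example, detecting a half-standard-deviation effect (sigma=1, delta=0.5) at the usual 5% level with 80% power requires 63 subjects per group.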

Task 19

  • Name: Study of the dependence of the quality of recognition of ontological objects on the depth of hyponymy.
  • Task: It is necessary to investigate the dependence of the quality of recognition of ontological objects at different levels of concept hyponymy. The classic formulation of the problem of named entity recognition: https://en.wikipedia.org/wiki/Named-entity_recognition
  • Data: Hyponyms from https://wordnet.princeton.edu/ , texts from different domains presumably from WebOfScience.
  • References: Relevant articles on the classical formulation: http://arxiv-sanity.com/search?q=named+entity+recognition
  • Basic algorithm: https://arxiv.org/pdf/1709.09686.pdf or its simplified version can be used as an algorithm, studies are performed using the DeepPavlov library.
  • Solution: It is necessary to collect a dataset of hyponymy (nesting of concepts) of objects using WordNet, to automatically mark up ontological objects of texts of various domains for several levels of generalization of concepts, to conduct a series of experiments to determine the quality of recognition of ontological objects for different levels of nesting.
  • Novelty: Similar studies have not been carried out, there are no ready-made datasets with a hierarchical markup of objects. Recognition of ontological objects at various levels of hyponymy can be used to produce additional features when solving various NLP (Natural language processing) tasks, as well as determining whether objects are a hyponym-hypernym pair.
  • Authors: Burtsev Mikhail Sergeevich (Expert), Baimurzina Dilyara Rimovna (consultant).
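The markup procedure at different generalization levels can be illustrated on a toy hypernym hierarchy. In the project the hierarchy would come from WordNet; the hard-coded dictionary below is an assumption that keeps the sketch self-contained.

```python
# Toy hypernym map (child -> parent); in the project it comes from WordNet.
HYPERNYMS = {
    "poodle": "dog", "dog": "canine", "canine": "animal",
    "siamese": "cat", "cat": "feline", "feline": "animal",
}

def generalize(term, levels):
    """Walk `levels` steps up the hyponym-hypernym chain (stops at the root)."""
    for _ in range(levels):
        term = HYPERNYMS.get(term, term)
    return term

def relabel(entity_tokens, levels):
    """Re-annotate entity labels at a coarser level of hyponymy."""
    return [generalize(t, levels) for t in entity_tokens]
```

Training and evaluating an NER model on `relabel(..., k)` for several k gives the quality-versus-hyponymy-depth curves the task asks for.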

Task 20

  • Name: Comparison of the quality of end-to-end trainable models in the task of answering questions in a dialogue, taking the context into account
  • Task: Given a text fragment and several consecutive questions. The answers to the first n questions are known; an answer to question n+1 must be formed. The answer is a contiguous span of the given text fragment (the indices of its first and last words). For quality evaluation, the task reduces to classifying the fragment's tokens into class 0 (not part of the answer) and class 1 (part of the answer).
  • Data: A labeled dataset with text fragments and sets of dialogue questions and answers is provided.
  • References: The paper Bi-directional Attention Flow for Machine Comprehension (BiDAF2017) describes an end-to-end question-answering model over a fragment without dialogue context. The paper QuAC: Question Answering in Context (QuAC2018) describes the dataset and the baseline algorithm that takes dialogue context into account. Papers describing other question-answering models (R-Net, DrQA).
  • Basic algorithm: The baseline is described in the papers and implemented (QuAC2018, BiDAF2017).
  • Solution: It is proposed to study context-handling mechanisms (k-ctx, append, etc.) and to investigate adding them to other models (DrQA, R-NET), or to propose new mechanisms that improve the F1 measure. Model behaviour is studied via attention visualization, learned embeddings, and analysis of erroneous answers. Access to computational resources is provided; frameworks: TensorFlow, PyTorch or Keras.
  • Novelty: The study is conducted on a new dataset for which only a baseline currently exists. Confirming quality gains from dialogue-context mechanisms in other models would indicate the applicability of the proposed approaches to a broader range of tasks.
  • Authors: Anton Khritankov
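Quality in such span-prediction tasks is commonly reported as token-level F1 between the predicted and gold answer spans; a minimal sketch of that metric follows (the official QuAC evaluation additionally handles multiple reference answers and text normalization).

```python
from collections import Counter

def span_f1(pred_tokens, gold_tokens):
    """Token-level F1 between a predicted and a gold answer span."""
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, predicting "the red car" against the gold span "red car" gives precision 2/3, recall 1, F1 = 0.8.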

Task 21

  • Name: High order convex optimization methods
  • Task: High-order methods are effective (up to n ~ 10^3, sometimes even up to n ~ 10^4) for convex problems of moderate dimension. Until recently it was generally accepted that these are second-order methods (using the second derivatives of the function being optimized). However, at the beginning of 2018 Yu.E. Nesterov [1] proposed a theoretically efficient third-order method that works according to almost optimal estimates. Exercise 1.3 of the manual [3] gives an example of a "bad" convex function, proposed by Yu.E. Nesterov, on which it is proposed to compare the second- and third-order Nesterov method [1], the second- and third-order method from [2], and ordinary fast gradient methods (of the first order). The comparison should be made both by the number of iterations and by the total running time.
  • References:
  1. https://alfresco.uclouvain.be/alfresco/service/guest/streamDownload/workspace/SpacesStore/aabc2323-0bc1-40d4-9653-1c29971e7bd8/coredp2018_05web.pdf?guest=true
  2. https://arxiv.org/pdf/1809.00382.pdf
  3. https://arxiv.org/pdf/1711.00394.pdf
  • Author: Evgenia Alekseevna Vorontsova (Associate Professor of Far Eastern Federal University, Vladivostok), Alexander Vladimirovich Gasnikov
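As a first-order reference point for this comparison, plain gradient descent and Nesterov's fast gradient method can be run on the classical worst-case quadratic f(x) = x^T A x / 2 - x_1 with tridiagonal A (the standard "bad" function for first-order methods). The dimension and iteration budget below are arbitrary assumptions; the higher-order methods from [1]-[2] are not sketched here.

```python
import math
import numpy as np

def gradient_descent(grad, x0, L, n_iter):
    """Plain gradient method with step 1/L for an L-smooth convex function."""
    x = x0.copy()
    for _ in range(n_iter):
        x -= grad(x) / L
    return x

def nesterov_fgm(grad, x0, L, n_iter):
    """Nesterov's fast gradient method (first-order, O(1/k^2) rate)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = y - grad(y) / L
        t_new = (1 + math.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

# Worst-case quadratic: A tridiagonal with 2 on the diagonal and -1 off it.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n); b[0] = 1.0
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
L = 4.0  # upper bound on the largest eigenvalue of A
```

Running both methods from the origin with the same iteration budget shows the accelerated method reaching a lower objective value, which is the kind of iteration-count comparison the task calls for.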

Task 22

  • Name: Cutting plane methods for copositive optimization
  • Task: A conic program over the copositive cone (copositive program): minimize <C,X> subject to <A_i,X> = b_i, X \in \Pi_i C^{k_i}, k_i <= 5. A linear function is minimized over the intersection of an affine subspace with a product of copositive cones of orders k_i <= 5. More details here.
  • Data: The algorithm will be tested on randomly generated instances
  • References:
    • [1] Peter J. C. Dickinson, Mirjam Dür, Luuk Gijben, Roland Hildebrand. Scaling relationship between the copositive cone and Parrilo’s first level approximation. Optim. Lett. 7(8), 1669—1679, 2013.
    • [2] Stefan Bundfuss, Mirjam Dür. Algorithmic copositivity detection by simplicial partition. Linear Alg. Appl. 428, 1511—1523, 2008.
    • [3] Mirjam Dür. Copositive programming — a Survey. In Recent advances in Optimization and its Applications in Engineering, Springer, pp. 3-20, 2010.
  • Basic algorithm: The reference algorithm is described in [4] Stefan Bundfuss, Mirjam Dür. An Adaptive Linear Approximation Algorithm for Copositive Programs. SIAM J. Optim., 20(1), 30-53, 2009.
  • Solution: The copositive program will be solved by a cutting-plane algorithm. The cutting plane (in the case of an infeasible iterate) will be constructed from the semidefinite representation of the diagonal-1 section of the cone proposed in [1]. The algorithm will be compared to the simplicial division method proposed in [2], [4]. General information about copositive programs and their applications in optimization can be found in [3].
  • Novelty: The proposed algorithm for optimization over copositive cones of order up to 5 uses an exact semidefinite representation. In contrast to all other algorithms existing today, the generation of cutting planes is non-iterative.
  • Author: Roland Hildebrand

Task 23

  • Name: Fractal analysis and synthesis of optical images of sea waves
  • Task: A variety of physical processes and phenomena are studied with the help of remotely obtained images. An important task is to obtain adequate information about the processes and phenomena of interest by measuring certain image characteristics. Lines of equal brightness (isolines) on images of many natural objects are fractal: they are sets of points that cannot be represented by lines of finite length and occupy an intermediate position between lines and two-dimensional flat figures. Such sets are characterized by the fractal dimension D, which generalizes the classical concept of the dimension of a set and can take fractional values. For a solitary point in an image D=0, for a smooth curve D=1, for a flat figure D=2. A fractal isoline has dimension 1<D<2. An algorithm for calculating D is given, for example, in [1]. The fractal dimension of sea-surface isolines can serve to estimate the spatial spectra of sea waves from remote sensing data [1]. The task is as follows: conduct a numerical study of the relationship between the characteristics of the spatial spectra of sea waves and the fractal dimension of satellite images of the Earth in the solar glare region. For the study, the method of numerical synthesis of optical images of sea waves described in [2] should be used. Numerical modeling should be done with different characteristics of sea waves, as well as with different positions of the Sun and different spatial resolutions of the images.
  • References:
    1. Lupyan E. A., Murynin A. B. Possibilities of fractal analysis of optical images of the sea surface. // Preprint of the Space Research Institute of the Academy of Sciences of the USSR Pr.-1521, Moscow, 1989, 30 p.
    2. Murynin A. B. Reconstruction of the spatial spectra of the sea surface from optical images in a nonlinear model of the brightness field // Research of the Earth from Space, 1990. No. 6. P. 60-70.
  • Author: Ivan Alekseevich Matveev
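An estimator of D of the kind referenced in [1] can be sketched by box counting: cover the point set with grids of increasing resolution and fit the slope of log N(s) versus log s. The test curves below are synthetic stand-ins for real isolines, and the scale set is an arbitrary choice.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16)):
    """Estimate the fractal dimension D of a point set in the unit square.

    points: (n, 2) array of (x, y) coordinates in [0, 1).
    For an s x s grid, N(s) is the number of occupied cells;
    D is the slope of log N(s) versus log s.
    """
    pts = np.asarray(points, dtype=float)
    counts = []
    for s in scales:
        cells = set(map(tuple, np.floor(pts * s).astype(int)))
        counts.append(len(cells))
    return float(np.polyfit(np.log(scales), np.log(counts), 1)[0])
```

A smooth curve (e.g. the diagonal of the unit square) yields D close to 1, a filled region yields D close to 2, and a fractal isoline falls in between.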

Task 24

  • Name: Entropy maximization for various types of image transformations
  • Task: Pansharpening is an algorithm for increasing the resolution of multispectral images using a reference image. The pansharpening task is formulated as follows: given a panchromatic image of the required resolution and a multispectral image of reduced resolution, restore the multispectral image at the spatial resolution of the panchromatic one. From empirical observations based on a large number of high-resolution images, it is known that the spatial variability of the reflected radiation intensity for objects of the same nature is much greater than the variability of their spectrum. In other words, the spectrum of reflected radiation is homogeneous within the boundaries of one object, while even within one object the intensity of reflected radiation varies. In practice, good results can be achieved with a simplified approach that assumes that if the intensities of neighboring regions differ significantly, these regions probably belong to different objects with different reflected spectra. This is the basis of the developed probabilistic algorithm for increasing the resolution of multispectral images using a reference image [1].
  • It is necessary to conduct a study on maximizing entropy for various types of transformations of an image, and to show that entropy can serve as an indicator of the loss of information contained in the image during transformations over it. Formulation of the inverse problem of image restoration:
    1. Condition 1: correspondence of the intensity (at each point) of the restored image with the intensity of the panchromatic image.
    2. Condition 2: correspondence of the low-frequency component of the reconstructed image with the original multispectral image.
    3. Condition 3: homogeneity (similarity) of the spectrum within one object, and the assumption of an abrupt change in the spectrum at the border of two homogeneous regions.
    4. Condition 4: under the first three conditions, the local entropy of the reconstructed image must be maximized.
  • References:
    1. Gorohovsky K. Yu., Ignatiev V. Yu., Murynin A. B., Rakova K. O. Search for optimal parameters of a probabilistic algorithm for increasing the spatial resolution of multispectral satellite images // Izvestiya RAN. Theory and control systems, 2017, No. 6.
  • Author: Ivan Alekseevich Matveev

Task 25

  • Name: Automatic detection and recognition of objects in images
  • Task: Automatic detection and recognition of objects in images and videos is one of the main tasks of computer vision. As a rule, these tasks are divided into several subtasks: preprocessing, extraction of the characteristic properties of the object image and classification. The pre-processing stage usually includes some operations on the image such as filtering, brightness equalization, geometric corrective transformations to facilitate robust feature extraction.

The characteristic properties of an image of an object are understood as a set of features that approximately describe the object of interest. Features can be divided into two classes: local and integral. The advantage of local features is their versatility and invariance with respect to uneven changes in brightness and illumination, but they are not unique. Integral features, which characterize the image of the object as a whole, are not robust to changes in the structure of the object or to difficult lighting conditions. There is a combined approach: the use of local features as elements of an integral description, when the desired object is modeled by a set of regions, each of which is characterized by its own set of features, a local texture descriptor. The totality of such descriptors characterizes the object as a whole. Classification is understood as determining whether an object belongs to a particular class by analyzing the feature vector obtained at the previous stage, dividing the feature space into subdomains indicating the corresponding class. There are many approaches to classification: neural networks, statistical methods (Bayesian, regression, Fisher, etc.), decision trees and forests, metric methods (K nearest neighbors, Parzen windows, etc.), kernel methods (SVM, RBF networks, the method of potential functions), and compositions (AdaBoost). For the task of detecting an object in an image, membership in two classes is evaluated: the class of images containing the object, and the class of images that do not contain it (background images).

Task 29

  • Name: Cross-Language Document Extractive Summarization with Neural Sequence Model.
  • Task: It is proposed to solve the transfer learning problem for the text reduction model by extractive summarization and to investigate the dependence of the quality of text reduction on the quality of training of the translation model. Having data for training the abbreviation model in English and a parallel English-Russian corpus of texts, build a model for abbreviating the text in Russian. The solution of the problem is evaluated on a small set of data for testing the model in Russian, the quality of the solution to the problem is determined by the ratio of the values of the ROUGE criteria in English and Russian sets.
  • Data: Data for training the model in English (SummaRuNNer2016), OPUS parallel corpus, data for verification in Russian.
  • References: The article (SummaRuNNer2016) describes the basic text reduction algorithm, the work Neural machine translation by jointly learning to align and translate.(NMT2016) describes the translation model. The idea of sharing models is presented in Cross-Language Document Summarization Based on Machine Translation Quality Prediction (CrossSum2010).
  • Basic algorithm: One idea of the basic algorithm is presented in (CrossSum2010), a translation model is implemented (OpenNMT), an implementation of a text reduction model is provided (SummaRuNNer2016).
  • Solution: It is suggested to explore the solution idea proposed in (CrossSum2010) and options for combining the reduction and translation models. Basic models and dataset preprocessing are implemented (OpenNMT), using the PyTorch and Tensorflow libraries. Analysis of text-reduction errors is performed as described in (SummaRuNNer2016); the quality of model training is analyzed with standard library tools.
  • Novelty: The applicability of the base model has so far been investigated on only a couple of datasets. Confirming that training transfers to a dataset in another language, and specifying the conditions for this transfer, will expand the scope of the model and indicate the necessary refinements of the model or of the data preprocessing.
  • Authors: Alexey Romanov (consultant), Anton Khritankov (Expert).
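The evaluation criterion above relies on ROUGE; a minimal sketch of ROUGE-1 recall follows. Real evaluations use the full ROUGE toolkit with stemming and multiple references, so this is an illustration only.

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Unigram recall of a candidate summary against a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    return overlap / sum(ref.values())
```

Comparing the ratio of such scores between the English and Russian test sets is how the task proposes to measure how much quality survives the cross-language transfer.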

Task 30

  • Name: Method for constructing an HG-LBP descriptor based on gradient histograms for pedestrian detection.
  • Task: It is proposed to develop a new descriptor that generalizes the LBP descriptor on the basis of histograms of gradient magnitudes, having the composition properties of HOG-LBP, for the task of detecting pedestrians in an image. To analyze the quality of the new descriptor, it is proposed to use FAR/FRR detection-error plots on the INRIA dataset.
  • Data: INRIA pedestrian database: http://pascal.inrialpes.fr/data/human/
  • References:
    1. T. Ojala and M. Pietikainen. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns, IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, July 2002.
    2. T. Bouwmans, C. Silva, C. Marghes, M. Zitouni, H. Bhaskar, C. Frelicot, "On the Role and the Importance of Features for Background Modeling and Foreground Detection", https://arxiv.org/pdf/1611.09099v1.pdf
    3. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893.
    4. T. Ahonen, A. Hadid, M. Pietikainen. Face Description with Local Binary Patterns: Application to Face Recognition \\ IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume: 28, Issue: 12.
    5. http://www.magicandlove.com/blog/2011/08/26/people-detection-in-opencv-again/
    6. http://www.cse.oulu.fi/CMV/Downloads/LBPMatlab
    7. http://www.mathworks.com/help/vision/ref/extractlbpfeatures.html
    8. http://www.codeproject.com/Articles/741559/Uniform-LBP-Features-and-Spatial-Histogram-Computa
    9. http://www.cse.oulu.fi/CMV/Research
  • Basic algorithm: Xiaoyu Wang, Tony X. Han, Shuicheng Yan. An HOG-LBP Human Detector with Partial Occlusion Handling \\ ICCV 2009
  • Solution: One option for generalizing LBP is to replace the histogram of pixel counts per LBP code with a histogram of the gradient magnitudes of the points in a block, accumulated per LBP code (HG-LBP). It is proposed to use the OpenCV library as the basis for experiments, since it implements the HOG and LBP algorithms. It is necessary to modify the source code of the LBP implementation and insert the computation of gradient magnitudes and the accumulation of the corresponding histogram over the LBP codes. It is necessary to write a program for reading the INRIA base, training a linear SVM on the original and modified descriptors, collecting detection statistics, and plotting FAR/FRR DET curves.
  • Novelty: The development of computationally simple methods for extracting the most informative features in recognition tasks is relevant to the field of embedded systems with low computing resources. Replacing a composition of descriptors with a single descriptor that is more informative than each individually can simplify the solution of the problem. The use of gradient values in LBP descriptor histograms is new.
  • Authors: Gneushev Alexander Nikolaevich
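The proposed HG-LBP modification can be prototyped outside OpenCV in a few lines: compute the LBP code of every pixel and accumulate a 256-bin histogram in which each pixel contributes its gradient magnitude instead of a unit count. The neighbour ordering and the use of numpy.gradient are illustrative choices, not the OpenCV implementation the task asks to modify.

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code for every interior pixel."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    return code

def hg_lbp_histogram(img):
    """256-bin LBP histogram weighted by per-pixel gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    weight = np.hypot(gx, gy)[1:-1, 1:-1]
    return np.bincount(lbp_codes(img).ravel(),
                       weights=weight.ravel(), minlength=256)
```

Flat regions contribute nothing (zero gradient), so the histogram concentrates on texture near edges, which is the intended effect of the gradient weighting.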

Task 31

  • Name: Using the HOG descriptor to train a neural network in a pedestrian detection task
  • Task: It is proposed to replace the linear SVM classifier in the classical HOG algorithm with a simple convolutional neural network of small depth, while the HOG descriptor should be represented as a three-dimensional tensor that preserves the spatial structure of local blocks. To analyze the quality of the new descriptor, it is proposed to use FAR/FRR detection-error plots on the INRIA dataset.
  • Data: INRIA pedestrian database: http://pascal.inrialpes.fr/data/human/
  • References:
    1. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893.
    2. Q. Zhu, S. Avidan, M.-C. Yeh, and K.-T. Cheng. Fast human detection using a cascade of histograms of oriented gradients. In CVPR, pages 1491-1498, 2006.
    3. O. Tuzel, F. Porikli, and P. Meer. Human detection via classification on Riemannian manifolds. In CVPR, 2007.
    4. P. Dollar, C. Wojek, B. Schiele and P. Perona. Pedestrian Detection: An Evaluation of the State of the Art / IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 34, Issue 4, pp. 743-761.
    5. Xiaoyu Wang, Tony X. Han, Shuicheng Yan, An HOG-LBP Human Detector with Partial Occlusion Handling, ICCV 2009 http://www.xiaoyumu.com/s/PDF/Wang_HOG_LBP.pdf
    6. https://en.wikipedia.org/wiki/Pedestrian_detection
    7. HOG person detector tutorial https://chrisjmccormick.wordpress.com/2013/05/09/hog-person-detector-tutorial/
    8. NavneetDalalThesis.pdf Navneet Dalal. Finding People in Images and Videos. PhD Thesis. Institut National Polytechnique de Grenoble / INRIA Rhone-Alpes, Grenoble, July 2006.
    9. People Detection in OpenCV http://www.magicandlove.com/blog/2011/08/26/people-detection-in-opencv-again/
    10. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.
  • Basic algorithm:
    1. N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893.
    2. Xiaoyu Wang, Tony X. Han, Shuicheng Yan, An HOG-LBP Human Detector with Partial Occlusion Handling, ICCV 2009.
  • Solution: One option for generalizing the HOG algorithm is to use another classifier instead of the linear SVM, for example a neural network. It is proposed to use the OpenCV library as the basis for experiments, since it implements the HOG algorithm and the SVM classifier. It is necessary to analyze the source code of the HOG implementation and formalize the internal structure of the HOG descriptor vector as a three-dimensional tensor with two spatial dimensions and one spectral dimension. It is necessary to write a program for reading the INRIA base, training a linear SVM on HOG descriptors from it, collecting detection statistics, and plotting FAR/FRR DET curves. Based on some neural network training framework (for example, mxnet), it is necessary to assemble a shallow (no more than 2-3 convolutional layers) convolutional neural network of known architecture, train it on the INRIA base with the HOG tensor descriptors, and build the corresponding FAR/FRR plots.
  • Novelty: The development of computationally simple methods for extracting the most informative features in recognition tasks is relevant to the field of embedded systems with low computing resources. Using a small number of the most informative descriptors can reduce computational complexity compared to a large composition of simple features, as in a deep convolutional neural network. Typically, classifiers consume the HOG descriptor as a flat vector, losing information about the local spatial structure and the feature spectrum. The novelty lies in exploiting the block-locality property of the HOG descriptor and representing the HOG as a 3D tensor. Using this information makes it possible to achieve detection robustness to pedestrian occlusion.
  • Authors: Gneushev Alexander Nikolaevich
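The key representational step, keeping HOG as a (cells_y, cells_x, orientations) tensor rather than a flat vector, can be sketched as follows. The cell size, the 9 unsigned orientation bins, and the absence of block normalization are simplifying assumptions relative to the full Dalal-Triggs pipeline and the OpenCV internals the task asks to analyze.

```python
import numpy as np

def hog_tensor(img, cell=8, n_bins=9):
    """Simplified HOG kept as a 3-D tensor (cells_y, cells_x, orientations)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned gradient direction
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    ny, nx = img.shape[0] // cell, img.shape[1] // cell
    tensor = np.zeros((ny, nx, n_bins))
    for i in range(ny):
        for j in range(nx):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            tensor[i, j] = np.bincount(bins[sl].ravel(),
                                       weights=mag[sl].ravel(),
                                       minlength=n_bins)
    return tensor
```

A shallow convolutional network can then consume this tensor directly, preserving the spatial layout of cells that is lost when the descriptor is flattened for an SVM.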

YEAR

Author Topic Links Consultant Reviewer Report Letters \Sigma=3+13
Goncharov Alexey (example) Metric classification of time series code,

paper, slides

Maria Popova Zadayanchuk Andrey BMF AILSBRCVTDSWH>
Astakhov Anton Restoring the structure of a predictive model from a probabilistic representation folder

code paper

Alexander Katrutsa Kislinsky Vadim BHF A-I-L0S0B0R0C0V0T0 [A-I-L-S-B0R0C0V0T0E0D0W0S] + [AILSBRCBTEDWS] 2+4
Gavrilov Yuri Choice of Interpreted Multimodels in Credit Scoring Tasks folder

code paper video

Goncharov Alexey Ostroukhov Petr BF A+IL-S0B-R0 [A+ILSBRC-VT0E0D0W0S] + (W) 2+9+1
Gadaev Tamaz Estimating the optimal sample size folder

code paper slides video

Alexander Katrutsa Shulgin Egor BHF A-IL>SB-R-C0V0T0 [AILSBR0CVT0E-D0W0S] 2+9
Gladin Egor Accelerometer Battery Savings Based on Time Series Forecasting folder

code paper slides

Maria Vladimirova Kozlinsky Evgeny

review

.F AILS [A-I-L-SB0R0C000V0T0E0D0W0S] 1+4
Grabovoi Andrey Automatic determination of the relevance of neural network parameters. folder

code paper slides video

Oleg Bakhteev Kulkov Alexander BHMF A+ILS+BRC+VTE>D> [AILSBRCVTEDWS] [\emptyset] 3+13
Nurlanov Zhakshylyk Deep Learning for reliable detection of tandem repeats in 3D protein structures folder

code paper slides video

S. V. Grudinin, Guillaume Pages Pletnev Nikita

Review

BHF AILB [A-I-LS-BRC0V0T-E0D0W0S] 2+7
Rogozina Anna Deep learning for RNA secondary structure prediction folder

code paper slides video

Maria Popova Gadaev Tamaz BHMF AILSBR> [AILSBRC0V0T0E0D0W0S]+CW 3+9
Terekhov Oleg Generation of features using locally approximating models folder

code paper slides

S.D. Ivanychev, R.G. Neichev Gladin Egor

review

BHM AILSBRCVTDSW [AIL0SB0R0C0V0TE0D0W0S] 2+12
Shulgin Egor Generation of features that are invariant to changes in the frequency of the time series folder

code paper

R.G. Neichev Terekhov Oleg BHM AIL [AI-LS-BR0CV0T0E0D0W0S] 2+5
Malinovsky Grigory Graph Structure Prediction of a Neural Network Model folder

code paper slides video

Oleg Bakhteev Grabovoi Andrey

review

BHMF A+I+L+SBR>C>V>T>E>D> [AILSBRC0VTED0WS]+(C) 3+11
Kulkov Alexander Brain signal decoding and intention prediction folder

code paper slides video

R.V. Isachenko Malinovsky Grigory

review

BHMF AILSBR [AILSBRCVTED0W0S] 3+11
Pletnev Nikita Approximation of the boundaries of the iris paper

slides [ video]

Alexander Aduenko Nurlanov Zhakshylyk BF AILSB>R> [AILSTWS] 2+7
Ostroukhov Petr Selection of models superposition for identification of a person on the basis of a ballistocardiogram folder

paper code slides

Alexander Prozorov Gavrilov Yuri

review

BhF AIL>S?B?R? [AILSBRCVT-E0D0W0S] 2+10
Kislinsky Vadim Predicting user music playlists in a recommender system. folder

code slides paper video

Evgeny Frolov Astakhov Anton .F (AIL)------(SB)---(RCVT)-- [AILS-BRCVTED0W0S] 1+11
Kozlinsky Evgeny Analysis of banking transactional data of individuals to identify customer consumption patterns. folder

code paper slides video

Rosa Aisina Rogozina Anna

review

BHMF AILSBR>CV> [AILSBR0C0V0TE0D0WS]+(С) 3+8+1

Task 1

  • Name: Approximation of the boundaries of the iris
  • Task: Given an image of the human eye, determine the circles approximating the inner and outer boundaries of the iris.
  • Data: Bitmap monochrome images, typical size 640×480 pixels (other sizes are possible) [49], [50].
  • References:
    • Aduenko A.A. Selection of multi-models in classification tasks (supervisor Strizhov V.V.). Moscow Institute of Physics and Technology, 2017. [51]
    • Gankin K.A., Gneushev A.N., Matveev I.A. Segmentation of the iris image based on approximate methods with subsequent refinements // Izvestiya RAN. Theory and Control Systems, 2014, no. 2, pp. 78-92.
    • Duda R.O., Hart P.E. Use of the Hough transformation to detect lines and curves in pictures // Communications of the ACM. 1972. Vol. 15, no. 1.
  • Basic algorithm: Efimov Yury. Search for the outer and inner boundaries of the iris in the eye image using the paired gradient method, 2015.
  • Solution: See iris_circle_problem.pdf
  • Novelty: A fast non-enumerative algorithm for approximating the boundaries with linear multimodels is proposed.
  • Consultant: Alexander Aduenko (by Strizhov V.V., Expert Matveev I.A.)
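A baseline for the circle-approximation step can be sketched as a Hough-style voting scheme over candidate centers and radii. This is only an illustrative brute-force sketch (the synthetic image, grid step, and radius range are assumptions), not the paired-gradient method named as the basic algorithm:

```python
import numpy as np

def hough_circle(edges, r_min, r_max):
    """Return (cx, cy, r) maximizing the number of edge points at distance r."""
    ys, xs = np.nonzero(edges)
    best, best_votes = None, -1
    h, w = edges.shape
    for cy in range(0, h, 2):          # coarse grid over candidate centers
        for cx in range(0, w, 2):
            d = np.hypot(xs - cx, ys - cy).astype(int)
            d = d[(d >= r_min) & (d <= r_max)]
            if d.size == 0:
                continue
            votes = np.bincount(d)     # votes[r] = edge points at radius r
            r = votes.argmax()
            if votes[r] > best_votes:
                best_votes, best = votes[r], (cx, cy, r)
    return best

# Synthetic "iris boundary": a circle of radius 20 centred at (32, 32).
img = np.zeros((64, 64), dtype=bool)
t = np.linspace(0, 2 * np.pi, 400)
img[(32 + 20 * np.sin(t)).astype(int), (32 + 20 * np.cos(t)).astype(int)] = True

cx, cy, r = hough_circle(img, 10, 30)
```

A real implementation would restrict the center grid using projections or gradients; here the full grid keeps the sketch short.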

Task 2

  • Name: Estimating the optimal sample size
  • Task: Under a shortage of expensive measurements, it is required to predict the optimal size of the replenished sample.
  • Data: Samples of measurements in medical diagnostics, in particular, a sample of immunological markers.
  • References:
  • Basic algorithm: Sample size estimation algorithms for .
  • Solution: Investigation of the properties of the parameter space when replenishing the sample.
  • Novelty: A new methodology for sample size forecasting is proposed, justified in terms of classical and Bayesian statistics.
  • Authors: A.M. Katrutsa, Strizhov V.V., Expert A.P. Motrenko

Task 3

  • Name: Restoring the structure of the prognostic model from a probabilistic representation
  • Task: It is required to reconstruct the superposition tree from the generated connection probability graph.
  • Data: Segments of time series, spatio-temporal series (and text collections).
  • References:
    • Works by Tommi Jaakkola and others, see the LinkReview [53].
  • Basic algorithm: Branch-and-bound method; dynamic programming when building a fully connected graph.
  • Solution: Build a generative model: a VAE or GAN generates a weighted graph, and a neural network approximates a tree structure.
  • Novelty: A way to penalize a graph for not being a tree is suggested. A method for predicting the structures of predictive models is proposed.
  • Authors: A.M. Katrutsa, Strizhov V.V.
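A simple reference point for recovering a tree from a connection-probability graph is the maximum spanning tree: keep the most probable edges that still form a tree. This sketch (Prim's method over a toy probability matrix, all values assumed) ignores arity constraints of a real superposition tree:

```python
import numpy as np

def max_spanning_tree(p):
    """p: symmetric (n, n) matrix of edge probabilities -> list of tree edges."""
    n = p.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Pick the most probable edge leaving the current tree.
        best = max(((i, j, p[i, j]) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda e: e[2])
        edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return edges

# Toy graph: chain 0-1-2-3 has high probabilities, all other edges are low.
p = np.full((4, 4), 0.1)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    p[i, j] = p[j, i] = 0.9
np.fill_diagonal(p, 0.0)

tree = max_spanning_tree(p)
```

The branch-and-bound basic algorithm would instead search over trees satisfying superposition constraints; the spanning tree gives a fast lower-level baseline.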

Task 4

  • Name: Text recognition based on skeletal representation of thick lines and convolutional networks
  • Task: It is required to build two CNNs, one recognizes a bitmap representation of an image, the other a vector one.
  • Data: Bitmap fonts.
  • References: List of works [54], in particular arXiv:1611.03199.
  • Basic algorithm: Convolution network for bitmap.
  • Solution: It is required to propose a method for collapsing graph structures, which allows generating an informative description of the skeleton of a thick line.
  • Novelty: A way to improve the quality of recognition of thick lines due to a new way of generating their descriptions is proposed.
  • Authors: L.M. Mestetsky, I.A. Reyer, Strizhov V.V.

Task 5

  • Name: Generation of features using locally approximating models
  • Task: It is required to test the hypothesis that the sample becomes simple in the space of the generated features. The features are the optimal parameters of approximating models; the sample as a whole is not simple and requires a mixture of models to approximate it. Explore the informativeness of the generated features, i.e. the parameters of the approximating models trained on segments of the original time series.
  • Data:
    • WISDM (Kwapisz, J.R., G.M. Weiss, and S.A. Moore. 2011. Activity recognition using cell phone accelerometers. ACM SigKDD Explorations Newsletter. 12(2):74–82.), USC-HAD or higher. Accelerometer data (Human activity recognition using smart phone embedded sensors: A Linear Dynamical Systems method, W Wang, H Liu, L Yu, F Sun - Neural Networks (IJCNN), 2014)
    • (Time series (examples library), Accelerometry section).
  • References:
    • Kuznetsov M.P., Ivkin N.P. Algorithm for classifying accelerometer time series by a combined feature description // Machine Learning and Data Analysis. 2015. V. 1, No. 11. Pp. 1471-1483. [55]
    • Karasikov M.E., Strizhov V.V. Classification of time series in the space of parameters of generating models // Informatics and Its Applications, 2016. URL
    • Isachenko R.V., Strizhov V.V. Metric learning in multiclass classification tasks of time series // Informatics and Its Applications, 2016, 10(2) : 48-57. URL
    • Zadayanchuk A.I., Popova M.S., Strizhov V.V. Choosing the optimal model for classifying physical activity based on accelerometer measurements // Information Technologies, 2016. URL
    • Motrenko A.P., Strijov V.V. Extracting fundamental periods to segment human motion time series // Journal of Biomedical and Health Informatics, 2016, Vol. 20, no. 6, 1466-1476.
    • Ignatov A., Strijov V. Human activity recognition using quasiperiodic time series collected from a single triaxial accelerometer // Multimedia Tools and Applications, 2015, 17.05.2015 : 1-14. URL
  • Basic algorithm: Described by Kuznetsov, Ivkin.
  • Solution: It is required to build a set of locally approximating models and choose the most adequate ones.
  • Novelty: A standard for building locally approximating models has been created.
  • Authors: S.D. Ivanychev, R.G. Neichev, Strizhov V.V.
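The idea of features as parameters of locally approximating models can be sketched in a few lines: fit a low-degree polynomial to each segment of the series and use its coefficients as the segment's feature vector. The segment length, degree, and synthetic "activities" below are assumptions:

```python
import numpy as np

def segment_features(series, seg_len=50, degree=2):
    """Fit a polynomial to each segment; its coefficients are the features."""
    t = np.arange(seg_len)
    feats = []
    for start in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[start:start + seg_len]
        feats.append(np.polyfit(t, seg, degree))   # [c2, c1, c0]
    return np.array(feats)

# Two synthetic activities with clearly different local structure.
t = np.arange(200)
walking = np.sin(0.3 * t)          # oscillating accelerometer-like signal
standing = 0.01 * t + 0.05         # nearly constant trend

f_walk = segment_features(walking)
f_stand = segment_features(standing)
```

For the linear "standing" series the fitted linear coefficient recovers the true slope 0.01 on every segment, while the oscillating series produces very different coefficients, so the segments separate in the feature space.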

Task 6

  • Name: Brain signal decoding and intention prediction
  • Task: It is required to build a model that restores the movement of the limbs from the corticogram.
  • Data: neurotycho.org [56]
  • References:
    • Neichev R.G., Katrutsa A.M., Strizhov V.V. Selection of the optimal set of features from a multicorrelated set in the forecasting problem. Zavodskaya Lab. Diagnostics of materials, 2016, 82(3) : 68-74. [57]
    • MLAlgorithms: Motrenko, Isachenko (submitted)
  • Basic algorithm: Partial Least Squares[58]
  • Solution: Create a feature selection algorithm, alternative to PLS, that takes into account the non-orthogonal structure of feature interdependence.
  • Novelty: A feature selection method is proposed that takes into account the regularities of both the independent and the dependent variables.
  • Authors: R.V. Isachenko, Strizhov V.V.
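The Partial Least Squares baseline can be sketched as a single NIPALS component: project the features onto the direction of maximal covariance with the target and regress on the resulting score. Data shapes, noise level, and the one-component truncation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                  # stand-in for corticogram features
w_true = np.array([1.0, -2.0, 0.0, 0.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)    # stand-in for limb position

Xc = X - X.mean(0)
yc = y - y.mean()

w = Xc.T @ yc                      # weight vector: covariance direction
w /= np.linalg.norm(w)
t_score = Xc @ w                   # latent score of the first component
p_load = Xc.T @ t_score / (t_score @ t_score)   # X loadings (for deflation)
q = yc @ t_score / (t_score @ t_score)          # regression of y on the score

y_hat = t_score * q + y.mean()
```

Further components would be extracted after deflating Xc by the outer product of t_score and p_load; the proposed alternative would replace the covariance direction w with a selection that models feature interdependence explicitly.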

Task 7

  • Name: Automatic determination of the relevance of neural network parameters.
  • Task: The task of finding a stable (and not redundant in terms of parameters) neural network structure is considered. To prune redundant parameters, it is proposed to introduce prior probabilistic assumptions about the distribution of the parameters and to remove non-informative parameters from the neural network using the Belsley method. To adjust the prior distribution, it is proposed to use gradient methods.
  • Data: A selection of handwritten MNIST digits
  • Basic algorithm: Optimal Brain Damage; pruning based on variational inference. The structure of the final model is proposed to be compared with the model obtained by the AdaNet algorithm.
  • References:
    • [59] Gradient hyperparameter optimization methods.
    • [60] Gradient hyperparameter optimization methods.
    • [61] Optimal Brain Damage.
    • [62] AdaNet
    • [63] Belsley Method
  • Authors: Oleg Bakhteev, Strizhov V.V.
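The Optimal Brain Damage baseline can be sketched in a few lines: with a diagonal Hessian approximation, the cost of removing weight w_i is the saliency s_i = h_ii * w_i^2 / 2, and the least salient weights are pruned first. The weights and Hessian diagonal below are synthetic placeholders, not values from a trained network:

```python
import numpy as np

w = np.array([0.9, -0.05, 0.4, 0.01, -0.7])      # trained weights (synthetic)
h_diag = np.array([2.0, 1.5, 0.1, 3.0, 1.0])     # diagonal Hessian (synthetic)

# OBD saliency: estimated increase of the loss when the weight is zeroed.
saliency = 0.5 * h_diag * w ** 2
prune_order = np.argsort(saliency)               # least important first

k = 2                                            # prune the two least salient
pruned = w.copy()
pruned[prune_order[:k]] = 0.0
```

The Belsley method proposed in the task would instead rank parameters by collinearity diagnostics of the parameter covariance, but the prune-by-score loop stays the same.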

Task 8

  • Name: Prediction of the graph structure of the neural network model.
  • Task: The task of finding a stable (and non-redundant in terms of parameters) structure of a convolutional neural network is considered. It is proposed to predict the structure of a neural network using doubly-recurrent neural networks. As a training sample, it is proposed to use the structures of models that have shown good quality on small subsamples.
  • Data: Samples MNIST, CIFAR-10
  • Basic algorithm: Random search. A comparison with work on reinforcement learning is possible.
  • References:
    • [64] doubly-recurrent neural networks.
    • [65] Similar approach using reinforcement learning.
  • Authors: Oleg Bakhteev, Strijov V.V.

Task 9

  • Name: Deep Learning for reliable detection of tandem repeats in 3D protein structures more in PDF
  • Task: Deep learning algorithms have pushed computer vision to a level of accuracy comparable to or higher than human vision. Similarly, we believe that the symmetry of a 3D object can be recognized with very high reliability when the object is represented as a density map. The optimization problem includes i) multiclass classification of 3D data, where the output is the order of symmetry (the number of classes is ~10-20), and ii) multi-output regression of 3D data, where the output is the symmetry axis (a 3-vector). The input data are typically 24x24x24 meshes; the total number of these meshes is of the order of a million. Biological motivation: symmetry is an important feature of protein tertiary and quaternary structures that has been associated with protein folding, function, evolution, and stability. Its emergence and ensuing prevalence has been attributed to gene duplications, fusion events, and subsequent evolutionary drift in sequence. Methods to detect these symmetries exist, based either on the structure or on the sequence of the proteins; however, we believe they can be vastly improved.
  • Data: Synthetic data are obtained by ‘symmetrizing’ folds from top8000 library (http://kinemage.biochem.duke.edu/databases/top8000.php).
  • References: Our previous 3D CNN: [66]. Invariance of CNNs (and references therein): 01630265/document, [67]
  • Basic algorithm: A prototype has already been created using the Tensorflow framework [4], which is capable of detecting the order of cyclic structures with about 93% accuracy. The main goal of this internship is to optimize the topology of the current neural network prototype and make it rotational and translational invariant with respect to input data. [4] [68]
  • Solution: The network architecture needs to be modified according to the invariance properties (most importantly, rotational invariance). Please see the links below [69],

[70] The code is written using the Tensorflow library, and the current model is trained on a single GPU (Nvidia Quadro 4000) of a desktop machine.

  • Novelty: Applications of convolutional networks to 3D data are still very challenging due to the large amount of data and specific requirements to the network architecture. More specifically, the models need to be rotationally and translationally invariant, which makes classical 2D augmentation tricks loosely applicable here. Thus, new models need to be developed for 3D data.
  • Authors: Expert Sergei Grudinin, consultants Guillaume Pages, Strizhov V.V.

Task 10

  • Name: Semi-supervised representation learning with attention
  • Task: Train vector representations of text using the attention mechanism, thanks to which the quality of machine translation has increased significantly. It is proposed to use it in an encoder-decoder architecture to obtain vectors of text fragments of arbitrary length.
  • Data: It is proposed to consider two samples: Microsoft Paraphrase Corpus (a small set of sentences, https://www.microsoft.com/en-us/download/details.aspx?id=52398) and PPDB (a set of short segments, not always with correct markup, http://sitem.herts.ac.uk/aeru/ppdb/en/)
  • References:

1. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin. Attention Is All You Need (https://arxiv.org/abs/1706.03762).
2. John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu. Towards Universal Paraphrastic Sentence Embeddings (https://arxiv.org/abs/1511.08198).
3. Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, Sanja Fidler. Skip-Thought Vectors (https://arxiv.org/abs/1506.06726).
4. Keras seq2seq (https://github.com/farizrahman4u/seq2seq).

  • Basic algorithm: solution [3] or vector representations obtained using seq2seq[].
  • Solution: it is proposed to train vector representations for phrases using the attention mechanism and semi-supervised learning. As an internal quality functional, it is proposed to use the improved error function from [2]. As an applied problem, one can consider paraphrase detection and sentiment analysis. Moreover, based on the results obtained in [1], it can be assumed that the attention mechanism has a greater influence on obtaining universal vectors for phrases than the network architecture. It is proposed to test this hypothesis using two different architectures: a standard recurrent network and a feed-forward network.
  • Novelty: new method.
  • Authors: Rita Kuznetsova, consultant
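The core of the mechanism from reference [1] is scaled dot-product attention. A single-head numpy sketch (sizes are illustrative; a real encoder-decoder adds learned projections and multiple heads):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_k) similarity matrix
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))   # values aligned with the keys
out, weights = attention(Q, K, V)
```

Each output vector is a convex combination of the value vectors, which is what lets a phrase representation pool information from fragments of arbitrary length.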

Task 11

  • Name: Selection of Interpreted Multi-Models in Credit Scoring Tasks
  • Task: The task of credit scoring is to determine the borrower's level of creditworthiness. For this, a borrower's questionnaire is used, containing both numerical (age, income) and categorical features (gender, profession). Given historical information about loan repayment by other borrowers, it is required to determine whether the borrower will repay the loan. The data can be heterogeneous (for example, if a country has regions with different incomes), and several models will be needed for adequate classification. It is necessary to determine the optimal number of models. Based on the set of model parameters, a portrait of the borrower must be drawn up.
  • Data: It is proposed to consider five samples from the UCI and Kaggle repositories, with a capacity of 50,000 objects or more.
  • References: A.A. Aduenko \MLAlgorithms\PhDThesis; C. Bishop, Pattern Recognition and Machine Learning, final chapter; Twenty Years of Mixture of Experts.
  • Basic algorithm: Clustering and building independent logistic regression models; AdaBoost; decision forest (with complexity restrictions); mixture of experts.
  • Solution: An algorithm is proposed for selecting a multi-model (a mixture of models or a mixture of experts) and determining the optimal number of models.
  • Novelty: A distance function between models whose parameter distributions are defined on different supports is proposed.
  • Authors: Goncharov Alexey, Strizhov V.V.
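The mixture-of-models idea can be sketched through responsibilities: each borrower is softly assigned to the logistic model that explains the observed repayment better. The model weights, the 50/50 prior, and the synthetic data below are assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def responsibilities(X, y, w1, w2, pi=0.5):
    """P(model 1 | x, y) under a two-expert mixture with prior pi."""
    p1 = sigmoid(X @ w1)                       # expert 1: P(y=1 | x)
    p2 = sigmoid(X @ w2)                       # expert 2: P(y=1 | x)
    l1 = np.where(y == 1, p1, 1 - p1) * pi
    l2 = np.where(y == 1, p2, 1 - p2) * (1 - pi)
    return l1 / (l1 + l2)

# Toy heterogeneous data: labels actually generated by expert 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
w1 = np.array([4.0, 0.0])
w2 = np.array([-4.0, 0.0])
y = (X @ w1 > 0).astype(int)

r = responsibilities(X, y, w1, w2)
```

In a full EM fit these responsibilities would weight the re-estimation of each expert, and the optimal number of experts would be chosen by a model-selection criterion.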

Task 12

  • Name: Generation of features that are invariant to changes in the frequency of the time series.
  • Task: Informally: there is a set of time series with a certain sampling rate (s1), and the information of interest is distinguishable at a lower sampling rate as well (for example, the samples arrive every millisecond, while the events of interest occur at intervals of 0.1 s). The series are aggregated, reducing the frequency by a factor of 10 (i.e. every 10 values are summed), and a set of time series s2 is obtained, which can be described in the same way. Formally: given a set of time series s1, ..., sN with a high sampling rate ν1, the target information (for example, hand movement / daily price fluctuation / ...) is distinguishable at a lower sampling rate ν2 < ν1 as well. It is necessary to find a mapping f_ν: S → G, where ν is the sampling rate of the series, that generates similar feature descriptions for series of different frequencies, i.e.

f* = argmin_f E(f_ν1(s1) - f_ν2(s2)), where E is some error function.

  • Data: Sets of time series of people's physical activity from accelerometers; human EEG time series; time series of energy consumption of cities/industrial facilities. Sample link: UCI repository, our EEG and accelerometer samples.
  • References: See the accelerometer references above.
  • Basic algorithm: Fourier transform.
  • Solution: Building an autoencoder with a partially fixed internal representation as the same time series with a lower frequency.
  • Novelty: There is no "common approach" to time-series analysis, in contrast to, for example, image analysis. Viewed abstractly, a cat that fills the whole image is now detected as reliably as a cat that occupies half of it. An analogy with time series suggests itself. Moreover, the nature of the data is similar: in images there is a hierarchy between values along two axes (x and y), in time series along one, the time axis. The hypothesis is that methods similar to those of image analysis will give qualitative results. The resulting feature representation can further be used for classification and prediction of time series.
  • Authors: R. G. Neichev, Strizhov V.V.
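The setup above can be sketched directly: build s2 from s1 by summing every k values, and compare a candidate frequency-invariant description, here a normalized low-frequency amplitude spectrum (the factor k, the spectrum length, and the synthetic sinusoid are assumptions; the proposed solution would use an autoencoder instead):

```python
import numpy as np

def downsample_sum(s, k=10):
    """Aggregate the series by summing every k consecutive values."""
    n = len(s) // k * k
    return s[:n].reshape(-1, k).sum(axis=1)

def spectrum_features(s, n_coef=15):
    """Normalized amplitudes of the first n_coef nonzero Fourier bins."""
    amp = np.abs(np.fft.rfft(s - s.mean()))[1:n_coef + 1]
    return amp / np.linalg.norm(amp)

t = np.arange(2000)
s1 = np.sin(2 * np.pi * t / 200)   # event period: 200 high-rate ticks
s2 = downsample_sum(s1)            # same event: 20 low-rate ticks

f1 = spectrum_features(s1)
f2 = spectrum_features(s2)
```

Because the number of event cycles per series is unchanged by the aggregation, both descriptions peak at the same Fourier bin, which is exactly the invariance the mapping f is asked to provide.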

YEAR

Group 594

Author Topic Link Consultant Reviewer Report Letters \Sigma=3+13
Goncharov Alexey (example) Metric classification of time series code,

paper, slides

Maria Popova Zadayanchuk Andrey BMF AILSBRCVTDSWH>
Belykh Evgeny Proskurin Alexander Classification of superpositions of movements of physical activity paper

slides code

Maria Vladimirova, Alexandra Malkova Romanenko Ilya, Popovkin Andrey, review

video

MF AILSBRC>V> [AILSBRC0VT0E0D0WS] CTD 2+9
Zueva Nadezhda Style Change Detection paper

slides video

Rita Kuznetsova Igashov Ilya, review BHMF AIL-S-B-R- [AILSBRCV0TE0D0WS] 3+10
Igashov Ilya Formulation and solution of an optimization problem combining classification and regression to estimate the binding energy of a protein and small molecules. paper

slides video

Sergei Grudinin, Maria Kadukova Manucharyan Vardan, review, correction BHMF AILBS+BRHC>V> [AILSBRCVTE0D0WS] 3+11
Kalugin Dmitry Graph Structure Prediction of a Neural Network Model paper

slides

Oleg Bakhteev Zueva Nadezhda review BHM AI-L-S--B0R0C0V0 [A-ILSBR0CVT0ED0WS] 2+11
Manucharyan Vardan Prediction of properties and types of atoms in molecular graphs using convolutional networks paper,

slides, code video

Sergei Grudinin, Maria Kadukova Fattakhov Artur review BMF AILS>B> [AILSB0R0CV0TE0D0WS] VED 3+7
Muraviev Kirill Determination of neural network parameters to be optimized. paper,

slides, code video

Oleg Bakhteev Kalugin Dmitry review BHMF A+IL-S-B-RCVTED [AILSBRCV0TE0DWS] 3+12
Murzin Dmitry Danilov Andrey Text recognition based on skeletal representation of thick lines and convolutional networks paper, slides, code

[video]

L. M. Mestetsky, Ivan Reyer, Zharikov I. N. Muraviev Kirill review BHMF A+IL> [AILSB0R0CV0TE0D0WS] 3+8
Popovkin Andrey Romanenko Ilya Creation of ranking models for information retrieval systems. Algorithm for Predicting the Structure of Locally Optimal Models paper

slides code video

Kulunchakov Andrey, Strizhov V.V. Proskurin Alexander, Belykh Evgeny, review BHMF AILS0BC>V> [AILSBRC0VTED0WS] 3+11
Fattakhov Artur Style Change Detection paper

slides code video

Rita Kuznetsova Danilov Andrey, Murzin Dmitry, review BMF AIL-S-B-R-CVTDSWH [AILSBRCVTE0D0WS] 3+11

Task 1 (1-2)

  • Name: Classification of superpositions of movements of physical activity
  • Task: Human behavior analysis by mobile phone sensor measurements: detect human movements from accelerometer data. The accelerometer data is a signal without precise periodicity, which contains an unknown superposition of physical models. We will consider the superposition of models: body + arm/bag/backpack.

Classification of human activities from fitness-bracelet measurements. From the accelerometer and gyroscope measurements, it is required to determine the type of the worker's activity. It is assumed that the time series of measurements contain elementary movements that form clusters in the space of time-series descriptions. (Development: the characteristic duration of a movement is seconds. The time series are labeled with activity types: work, rest. The characteristic duration of an activity is minutes. It is required to recover the type of activity from the description of the time series and the clusters.)

  • Data:
  • References:
    • Karasikov M. E., Strizhov V. V. Classification of time series in the space of parameters of generating models // Informatics and its applications, 2016. [URL]
    • Kuznetsov M.P., Ivkin N.P. Algorithm for classification of accelerometer time series by combined feature description // Machine Learning and Data Analysis. 2015. T. 1, No. 11. C. 1471-1483. [URL]
    • Isachenko R. V., Strizhov V. V. Metric learning in Tasks of multiclass classification of time series // Informatics and its applications, 2016, 10(2): 48-57. [URL]
    • Zadayanchuk A.I., Popova M.S., Strizhov V.V. Choice of the optimal model for classifying physical activity based on accelerometer measurements // Information technologies, 2016. [URL]
    • Motrenko A.P., Strijov V.V. Extracting fundamental periods to segment human motion time series // Journal of Biomedical and Health Informatics, 2016, Vol. 20, no. 6, 1466-1476. [URL]
    • Ignatov A., Strijov V. Human activity recognition using quasiperiodic time series collected from a single triaxial accelerometer // Multimedia Tools and Applications, 2015, 17.05.2015 : 1-14. [URL]
  • Basic algorithm: Basic algorithm is described in [Karasikov, Strizhov: 2016] and [Kuznetsov, Ivkin: 2014].
  • Solution: Find the optimal segmentation method and optimal description of the time series. Construct a metric space of descriptions of elementary motions.
  • Novelty: A method for classifying and analyzing complex movements is proposed. (Development: linking the two characteristic time scales of the description of a person's activity; a combined problem statement.)
  • Authors: Alexandra Malkova, Maria Vladimirova, R. G. Neichev, Strizhov V.V.

Task 2 (1)

Task 3 (1-2)

  • Name: Text recognition based on skeletal representation of thick lines and convolutional networks
  • Task: It is required to build two CNNs, one recognizes a bitmap representation of an image, the other a vector one. (Development: generation of thick lines by neural networks)
  • Data: Bitmap fonts.
  • References: List of works [71], in particular arXiv:1611.03199.
  • Basic algorithm: Convolution network for bitmap.
  • Solution: It is required to propose a method for collapsing graph structures, which allows generating an informative description of the skeleton of a thick line.
  • Novelty: A way to improve the quality of recognition of thick lines due to a new way of generating their descriptions is proposed.
  • Authors: L. M. Mestetsky, I. A. Reyer, Strizhov V.V.

Task 4 (1-2)

  • Name: Creation of ranking models for information retrieval systems. Algorithm for Predicting the Structure of Locally Optimal Models
  • Task: It is required to predict a time series using some parametric superposition of algebraic functions. It is proposed not to construct the forecasting model itself but to predict it, that is, to predict the structure of the approximating superposition. A class of admissible superpositions is introduced, and on the set of such structural descriptions a search is made for the locally optimal model for the problem under consideration. The task consists in 1) searching for a suitable structural description of the model, 2) describing the search algorithm for the structure corresponding to the optimal model, 3) describing the algorithm for reconstructing the model from its structural description. For an existing example of answers to questions 1-3, see the works of A. A. Varfolomeeva.
  • Data:
    • Collection of text documents TREC (!)
    • A set of time series, which implies the restoration of functional dependencies. It is proposed to first use synthetic data or immediately apply the algorithm to forecasting time series 1) electricity consumption 2) physical activity with subsequent analysis of the resulting structures.
  • References:
    • (!) Kulunchakov A.S., Strijov V.V. Generation of simple structured Information Retrieval functions by genetic algorithm without stagnation // Expert Systems with Applications, 2017, 85: 221–230.
    • A. A. Varfolomeeva Selection of features when marking up bibliographic lists using structural learning methods, 2013, [72]
    • Bin Cao, Ying Li and Jianwei Yin Measuring Similarity between Graphs Based on the Levenshtein Distance, 2012, [73]
  • Basic algorithm: There is no basic algorithm for the proposed problem as such. It is proposed to repeat the experiment of A.A. Varfolomeeva with a different structural description in order to understand what is happening.
  • Solution: A superposition of algebraic functions defines an ordered tree whose vertices are labelled with the corresponding algebraic functions or variables. Therefore the structural description of such a superposition can be its DFS-code: a string of vertex labels written in the order of a depth-first traversal of the tree. Knowing the arities of the corresponding algebraic functions, we can restore the superposition from any such DFS-code in O(n). On the set of such string descriptions, it is proposed to search for the one that corresponds to the optimal model.
  • Authors: Kulunchakov Andrey, Strizhov V.V.
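The DFS-code described in the Solution item can be sketched directly: encode an ordered labelled tree as the label sequence of a depth-first traversal, and decode it back using the known arities. The function set and labels below are illustrative assumptions:

```python
# Arity table for an illustrative function/variable alphabet.
ARITY = {'add': 2, 'mul': 2, 'sin': 1, 'x': 0, 'y': 0}

def encode(tree):
    """tree = (label, [children]) -> DFS-code as a list of labels."""
    label, children = tree
    code = [label]
    for child in children:
        code += encode(child)
    return code

def decode(code):
    """Inverse mapping: consume labels in DFS order using known arities, O(n)."""
    pos = 0
    def build():
        nonlocal pos
        label = code[pos]
        pos += 1
        children = [build() for _ in range(ARITY[label])]
        return (label, children)
    return build()

# The superposition add(sin(x), mul(x, y)).
expr = ('add', [('sin', [('x', [])]), ('mul', [('x', []), ('y', [])])])
code = encode(expr)   # ['add', 'sin', 'x', 'mul', 'x', 'y']
```

Search over superpositions then becomes search over such strings, with edit distances (as in the Levenshtein reference above) giving a natural metric on structures.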

Task 5 (1)

  • Name: Definition of neural network parameters to be optimized.
  • Task: The task of neural network optimization is considered. It is required to divide the model parameters into two groups:
    • a) Model parameters to be optimized
    • b) Model parameters whose optimization has been completed. Further optimization of these parameters will not improve the quality of the model.

It is proposed to consider the optimization of parameters as a stochastic process. Based on the history of the process, we find those parameters whose optimization is no longer required.

  • Data: A selection of handwritten MNIST digits
  • Basic algorithm: Random choice of parameters.
  • References:
    • [74] SGD as a stochastic process.
    • [75] Variational inference in neural networks.
  • Novelty: The resulting algorithm will significantly reduce the computational cost of neural network optimization. A possible further development of the method is to obtain estimates for the parameters of networks derived from the original one by operations of expansion, compression, and adding or removing layers.
  • Authors: Oleg Bakhteev, Strizhov V.V.
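Treating optimization as a stochastic process suggests a simple stopping test: a parameter whose recent gradient history averages to zero (relative to its scatter) is considered optimized and frozen. The z-threshold, window length, and synthetic gradient histories below are assumptions, not the method of the cited works:

```python
import numpy as np

def frozen_mask(grad_history, z=2.0):
    """grad_history: (steps, n_params) array of per-parameter gradients.
    Freeze a parameter when its mean gradient is within z standard errors
    of zero, i.e. statistically indistinguishable from a converged one."""
    g = np.asarray(grad_history)
    mean = g.mean(axis=0)
    sem = g.std(axis=0) / np.sqrt(g.shape[0]) + 1e-12
    return np.abs(mean) < z * sem

rng = np.random.default_rng(0)
steps = 200
converged = rng.normal(0.0, 1.0, size=(steps, 1))     # gradient is pure noise
drifting = rng.normal(0.5, 1.0, size=(steps, 1))      # consistent positive drift
also_drifting = rng.normal(-0.5, 1.0, size=(steps, 1))
history = np.hstack([converged, drifting, also_drifting])

mask = frozen_mask(history)   # True marks parameters to stop optimizing
```

Frozen parameters are simply excluded from subsequent updates, which is where the claimed reduction in computational cost comes from.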