Stas Tiomkin
Assistant Professor, CS Dept., Whitacre College of Engineering, Texas Tech U
Verified email at sjsu.edu - Homepage
Title · Cited by · Year
System and method for extracting and using prosody features
DW Stas Tiomkin
US Patent 20170103748A1, 2017
Cited by 82*, 2017
A hybrid text-to-speech system that combines concatenative and statistical synthesis units
S Tiomkin, D Malah, S Shechtman, Z Kons
IEEE Transactions on Audio, Speech, and Language Processing 19 (5), 1278-1288, 2010
Cited by 72, 2010
AvE: Assistance via empowerment
Y Du, S Tiomkin, E Kiciman, D Polani, P Abbeel, A Dragan
Advances in Neural Information Processing Systems 33, 4560-4571, 2020
Cited by 40, 2020
A unified Bellman equation for causal information and value in Markov decision processes
S Tiomkin, N Tishby
arXiv preprint arXiv:1703.01585, 2017
Cited by 38, 2017
Dynamics generalization via information bottleneck in deep reinforcement learning
X Lu, K Lee, P Abbeel, S Tiomkin
arXiv preprint arXiv:2008.00614, 2020
Cited by 28, 2020
Control capacity of partially observable dynamic systems in continuous time
S Tiomkin, D Polani, N Tishby
arXiv preprint arXiv:1701.04984, 2017
Cited by 19, 2017
Efficient Empowerment Estimation for Unsupervised Stabilization
R Zhao, K Lu, P Abbeel, S Tiomkin
International Conference on Learning Representations (ICLR), 2021
Cited by 13*, 2021
Past-future Information Bottleneck for linear feedback systems
N Amir, S Tiomkin, N Tishby
2015 54th IEEE Conference on Decision and Control (CDC), 5737-5742, 2015
Cited by 10, 2015
Statistical text-to-speech synthesis based on segment-wise representation with a norm constraint
S Tiomkin, D Malah, S Shechtman
IEEE Transactions on Audio, Speech, and Language Processing 18 (5), 1077-1082, 2010
Cited by 10, 2010
Predictive coding for boosting deep reinforcement learning with sparse rewards
X Lu, S Tiomkin, P Abbeel
arXiv preprint arXiv:1912.13414, 2019
Cited by 7, 2019
Learning efficient representation for intrinsic motivation
R Zhao, S Tiomkin, P Abbeel
arXiv preprint arXiv:1912.02624, 2019
Cited by 6, 2019
Utilizing Prior Solutions for Reward Shaping and Composition in Entropy-Regularized Reinforcement Learning
J Adamczyk, A Arriojas, S Tiomkin, RV Kulkarni
Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23), 2022
Cited by 5, 2022
Preventing imitation learning with adversarial policy ensembles
A Zhan, S Tiomkin, P Abbeel
arXiv preprint arXiv:2002.01059, 2020
Cited by 5, 2020
Cognitive workload and vocabulary sparseness: theory and practice.
RM Hecht, A Bar-Hillel, S Tiomkin, H Levi, O Tsimhoni, N Tishby
INTERSPEECH, 3394-3398, 2015
Cited by 4, 2015
Statistical text-to-speech synthesis with improved dynamics.
S Tiomkin, D Malah
INTERSPEECH, 1841-1844, 2008
Cited by 4, 2008
Entropy regularized reinforcement learning using large deviation theory
A Arriojas, J Adamczyk, S Tiomkin, RV Kulkarni
Physical Review Research 5 (2), 023085, 2023
Cited by 3, 2023
A segment-wise hybrid approach for improved quality text-to-speech synthesis
S Tiomkin
Technion-Israel Institute of Technology, Faculty of Electrical Engineering, 2009
Cited by 3, 2009
Bounding the optimal value function in compositional reinforcement learning
J Adamczyk, V Makarenko, A Arriojas, S Tiomkin, RV Kulkarni
Uncertainty in Artificial Intelligence, 22-32, 2023
Cited by 2, 2023
Dimensionality Reduction of Dynamics on Lie Groups via Structure-Aware Canonical Correlation Analysis
W Chung, D Polani, S Tiomkin
2024 American Control Conference (ACC), 439-446, 2024
Cited by 1, 2024
Multi-Resolution Diffusion for Privacy-Sensitive Recommender Systems
D Lilienthal, P Mello, M Eirinaki, S Tiomkin
IEEE Access 11, doi: 10.1109/ACCESS.2024.3388299, 2024
Cited by 1, 2024