Xiaojie Jin, 靳潇杰
Bytedance Research, USA
Verified email at bytedance.com
Title · Cited by · Year
Dual path networks
Y Chen, J Li, H Xiao, X Jin, S Yan, J Feng
NIPS, 4467-4475, 2017
Cited by 1035 · 2017
DeepViT: Towards deeper vision transformer
D Zhou, B Kang, X Jin, L Yang, X Lian, Z Jiang, Q Hou, J Feng
arXiv preprint arXiv:2103.11886, 2021
Cited by 627 · 2021
Conflict-averse gradient descent for multi-task learning
B Liu, X Liu, X Jin, P Stone, Q Liu
NeurIPS 34, 18878-18890, 2021
Cited by 283 · 2021
Deep learning with s-shaped rectified linear activation units
X Jin, C Xu, J Feng, Y Wei, J Xiong, S Yan
AAAI, 2016
Cited by 275 · 2016
Deep self-taught learning for weakly supervised object localization
Z Jie, Y Wei, X Jin, J Feng, W Liu
CVPR, 1377-1385, 2017
Cited by 246 · 2017
All tokens matter: Token labeling for training better vision transformers
ZH Jiang, Q Hou, L Yuan, D Zhou, Y Shi, X Jin, A Wang, J Feng
NeurIPS 34, 18590-18602, 2021
Cited by 209 · 2021
Tree-structured reinforcement learning for sequential object localization
Z Jie, X Liang, J Feng, X Jin, W Lu, S Yan
NIPS, 127-135, 2016
Cited by 157 · 2016
Video scene parsing with predictive feature learning
X Jin, X Li, H Xiao, X Shen, Z Lin, J Yang, Y Chen, J Dong, L Liu, Z Jie, ...
ICCV, 5580-5588, 2017
Cited by 150 · 2017
AtomNAS: Fine-grained end-to-end neural architecture search
J Mei, Y Li, X Lian, X Jin, L Yang, A Yuille, J Yang
ICLR 2020, 2019
Cited by 147 · 2019
Contrastive masked autoencoders are stronger vision learners
Z Huang, X Jin, C Lu, Q Hou, MM Cheng, D Fu, X Shen, J Feng
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Cited by 128 · 2023
Training skinny deep neural networks with iterative hard thresholding methods
X Jin, X Yuan, J Feng, S Yan
arXiv preprint arXiv:1607.05423, 2016
Cited by 91 · 2016
Human-centric spatio-temporal video grounding with visual transformers
Z Tang, Y Liao, S Liu, G Li, X Jin, H Jiang, Q Yu, D Xu
IEEE Transactions on Circuits and Systems for Video Technology 32 (12), 8238 …, 2021
Cited by 85 · 2021
Predicting scene parsing and motion dynamics in the future
X Jin, H Xiao, X Shen, J Yang, Z Lin, Y Chen, Z Jie, J Feng, S Yan
NIPS, 6915-6924, 2017
Cited by 80 · 2017
Refiner: Refining self-attention for vision transformers
D Zhou, Y Shi, B Kang, W Yu, Z Jiang, Y Li, X Jin, Q Hou, J Feng
arXiv preprint arXiv:2106.03714, 2021
Cited by 71 · 2021
Neural Architecture Search for Lightweight Non-Local Networks
Y Li, X Jin, J Mei, X Lian, L Yang, C Xie, Q Yu, Y Zhou, S Bai, AL Yuille
CVPR, 10297-10306, 2020
Cited by 69 · 2020
HR-NAS: Searching Efficient High-Resolution Neural Architectures with Lightweight Transformers
M Ding, X Lian, L Yang, P Wang, X Jin, Z Lu, P Luo
CVPR, 2982-2992, 2021
Cited by 63 · 2021
Token labeling: Training a 85.5% top-1 accuracy vision transformer with 56M parameters on ImageNet
Z Jiang, Q Hou, L Yuan, D Zhou, X Jin, A Wang, J Feng
arXiv preprint arXiv:2104.10858, 2021
Cited by 55 · 2021
Training group orthogonal neural networks with privileged information
Y Chen, X Jin, J Feng, S Yan
arXiv preprint arXiv:1701.06772, 2017
Cited by 52 · 2017
Sharing Residual Units Through Collective Tensor Factorization To Improve Deep Neural Networks.
Y Chen, X Jin, B Kang, J Feng, S Yan
IJCAI, 635-641, 2018
Cited by 46 · 2018
RC-DARTS: Resource constrained differentiable architecture search
X Jin, J Wang, J Slocum, MH Yang, S Dai, S Yan, J Feng
arXiv preprint arXiv:1912.12814, 2019
Cited by 44 · 2019
Articles 1–20