
具有聚類(lèi)結(jié)構(gòu)相似性的非參數(shù)貝葉斯字典學(xué)習(xí)算法

Daoguang DONG, Guosheng RUI, Wenbiao TIAN, Yang ZHANG, Ge LIU

Citation: Daoguang DONG, Guosheng RUI, Wenbiao TIAN, Yang ZHANG, Ge LIU. A Nonparametric Bayesian Dictionary Learning Algorithm with Clustering Structure Similarity[J]. Journal of Electronics & Information Technology, 2020, 42(11): 2765-2772. doi: 10.11999/JEIT190496


doi: 10.11999/JEIT190496 cstr: 32379.14.JEIT190496
Funds: The National Natural Science Foundation of China (41606117, 41476089, 61671016)
Details
    Author biographies:

    Daoguang DONG: Male, born in 1990, Ph.D., research interests: Bayesian statistical learning, compressed sensing, and atmospheric ducts

    Guosheng RUI: Male, born in 1968, Ph.D., professor, doctoral supervisor, research interests: modern communication theory and signal processing

    Wenbiao TIAN: Male, born in 1987, Ph.D., associate professor, research interests: atmospheric ducts and compressed sensing

    Yang ZHANG: Male, born in 1983, Ph.D., lecturer, research interest: chaotic communication technology

    Ge LIU: Female, born in 1991, Ph.D., research interests: compressed sensing and atmospheric ducts

    Corresponding author:

    Daoguang DONG sikongyu@yeah.net

  • CLC number: TN911.73; TP301.6

  • Abstract: Exploiting image structure information is a difficult problem in dictionary learning. To address the insufficient use of such information by traditional nonparametric Bayesian algorithms, as well as their low running efficiency, a Structural Similarity Clustering Beta Process Factor Analysis (SSC-BPFA) dictionary learning algorithm is proposed. The algorithm balances the local structural similarity and the global clustering diversity of images by means of a Markov random field and a hierarchical Dirichlet process, and learns the probabilistic model efficiently via variational Bayesian inference, which guarantees convergence while keeping the clustering adaptive. Experiments show that, compared with the current mainstream nonparametric Bayesian dictionary learning algorithms, the proposed algorithm achieves higher representation accuracy, structural similarity measure, and running efficiency in image denoising and interpolation-based inpainting applications.
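The abstract rests on the sparse synthesis model used throughout the paper, in which each sample is a noisy combination of dictionary atoms gated by binary indicators, i.e. $X = D(S \odot Z) + E$. A minimal numpy sketch of this generative model follows; all sizes and the sparsity level are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

# Sketch of the sparse synthesis model: x_i = D(s_i ⊙ z_i) + eps_i,
# i.e. X = D(S ⊙ Z) + E. Sizes and sparsity are assumptions.
rng = np.random.default_rng(1)
P, K, N = 64, 128, 200                          # patch dim, atoms, samples
D = rng.standard_normal((P, K)) / np.sqrt(P)    # dictionary atoms
S = rng.standard_normal((K, N))                 # real-valued weights s_ik
Z = (rng.random((K, N)) < 0.05).astype(float)   # binary indicators z_ik
E = 0.01 * rng.standard_normal((P, N))          # Gaussian noise
X = D @ (S * Z) + E                             # observed data matrix
```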
  • Figure 1  Probabilistic graphical model of the proposed algorithm

    Figure 2  Clustering results obtained during the dictionary learning process of the proposed algorithm

    Figure 3  Image denoising results of different algorithms

    Figure 4  Denoising signal-to-error ratio (SER) under different SNR conditions

    Figure 5  Image inpainting results of different algorithms

    Figure 6  Average SER under different missing-rate conditions

    Figure 7  Average SSIM under different missing-rate conditions

    Figure 8  Average time cost under different missing-rate conditions

    Table 1  VB inference update formulas for the latent variables and their variational parameters

    Each row lists: latent variable; variational distribution; VB update formulas for the variational parameters; latent-variable update formula.

    ${{d}_k}$: $q\left( {{{d}_k}} \right) \propto {\rm{Normal}}\left( {{{d}_k}|{\tilde{ m}}_k,{\tilde { \varLambda }}_k} \right)$
      ${\tilde { \varLambda }_k} = P{{I}_P} + \displaystyle\sum\nolimits_{i = 1}^N {\dfrac{{\tilde e}_0}{{\tilde f}_0}\left( {\tilde \mu _{ik}^2 + \tilde \sigma _{ik}^2} \right){\tilde \rho _{ik}}{{I}_P}}$, ${\tilde { m}_k} = {\tilde { \varLambda }}_k^{ - 1} \cdot \left( {\displaystyle\sum\nolimits_{i = 1}^N {\dfrac{{\tilde e}_0}{{\tilde f}_0}{\tilde \mu _{ik}}{\tilde \rho _{ik}}{{x}}_i^{ - k}} } \right)$; update: ${{d}_k} = {\tilde { m}_k}$

    ${s_{ik}}$: $q\left( {{s_{ik}}} \right) \propto {\rm{Normal}}\left( {{s_{ik}}|{\tilde \mu _{ik}},\tilde \sigma _{ik}^2} \right)$
      $\tilde \sigma _{ik}^2 = {\left\{ {\dfrac{{\tilde e}_0}{{\tilde f}_0}{\tilde \rho _{ik}}\left[ {{\tilde { m}}_k^{\rm{T}}{\tilde { m}}_k + {\rm{tr}}({\tilde { \varLambda }}_k^{ - 1})} \right] + \dfrac{{\tilde c}_0}{{\tilde d}_0}} \right\}^{ - 1}}$, ${\tilde \mu _{ik}} = \tilde \sigma _{ik}^2 \cdot \left( {\dfrac{{\tilde e}_0}{{\tilde f}_0}{\tilde \rho _{ik}}{\tilde { m}}_k^{\rm{T}}{{x}}_i^{ - k}} \right)$; update: ${s_{ik}} = {\tilde \mu _{ik}}$

    ${z_{ik}}$: $q\left( {{z_{ik}}} \right) \propto {\rm{Bernoulli}}\left( {{z_{ik}}|{\tilde \rho _{ik}}} \right)$
      ${\tilde \rho _{ik}} = \dfrac{{\rho _{ik,1}}}{{\rho _{ik,1}} + {\rho _{ik,0}}}$, with
      ${\rho _{ik,0}} = \exp \left\{ \displaystyle\sum\nolimits_{l = 1}^L {{\tilde \xi _{il}} \cdot [\psi ({\tilde b_{lk}}) - \psi ({\tilde a_{lk}} + {\tilde b_{lk}})]} \right\}$,
      ${\rho _{ik,1}} = \exp \Bigl\{ - \dfrac{1}{2}\dfrac{{\tilde e}_0}{{\tilde f}_0}(\tilde \mu _{ik}^2 + \tilde \sigma _{ik}^2)\bigl({\tilde { m}}_k^{\rm{T}}{\tilde { m}}_k + {\rm{tr}}({\tilde { \varLambda }}_k^{ - 1})\bigr) + \dfrac{{\tilde e}_0}{{\tilde f}_0}{\tilde \mu _{ik}}{\tilde { m}}_k^{\rm{T}}{{x}}_i^{ - k} + \displaystyle\sum\nolimits_{l = 1}^L {{\tilde \xi _{il}} \cdot [\psi ({\tilde a_{lk}}) - \psi ({\tilde a_{lk}} + {\tilde b_{lk}})]} \Bigr\}$; update: ${z_{ik}} = {\tilde \rho _{ik}}$

    $\pi _{lk}^ *$: $q\left( {\pi _{lk}^ * } \right) \propto {\rm{Beta}}\left( {\pi _{lk}^ * |{\tilde a_{lk}},{\tilde b_{lk}}} \right)$
      ${\tilde a_{lk}} = {{a_0} / K} + \displaystyle\sum\nolimits_{i = 1}^N {{\tilde \rho _{ik}}{\tilde \xi _{il}}}$, ${\tilde b_{lk}} = {{b_0}(K - 1) / K} + \displaystyle\sum\nolimits_{i = 1}^N {(1 - {\tilde \rho _{ik}}){\tilde \xi _{il}}}$; update: $\pi _{lk}^ * = \dfrac{{\tilde a_{lk}}}{{\tilde a_{lk}} + {\tilde b_{lk}}}$

    ${t_i}$: $q\left( {{t_i}} \right) \propto {\rm{Multi}}\left( {{\tilde \xi _{i1}},{\tilde \xi _{i2}},\cdots,{\tilde \xi _{iL}}} \right)$
      ${\tilde \xi _{il}} = {{\xi _{il}} / {\displaystyle\sum\nolimits_{l' = 1}^L {{\xi _{il'}}}}}$, with
      ${\xi _{il}} = \exp \Bigl\{ \displaystyle\sum\nolimits_{k = 1}^K {{\tilde \rho _{ik}}[\psi ({\tilde a_{lk}}) - \psi ({\tilde a_{lk}} + {\tilde b_{lk}})]} + \displaystyle\sum\nolimits_{k = 1}^K {(1 - {\tilde \rho _{ik}})[\psi ({\tilde b_{lk}}) - \psi ({\tilde a_{lk}} + {\tilde b_{lk}})]} + \psi ({\tilde \zeta _{il}}) - \psi \bigl(\displaystyle\sum\nolimits_{l' = 1}^L {{\tilde \zeta _{il'}}} \bigr)\Bigr\}$; update: ${t_i} = \mathop {\arg \max }\limits_l \left\{ {{\tilde \xi _{il}},\ \forall l \in [1,L]} \right\}$

    ${\beta _{il}}$: $q\left( {\{ {\beta _{il}}\} _{l = 1}^L} \right) \propto {\rm{Diri}}\left( {{\tilde \lambda _{i1}},{\tilde \lambda _{i2}},\cdots,{\tilde \lambda _{iL}}} \right)$
      ${\tilde \lambda _{il}} = {\tilde \xi _{il}} + {\alpha _0}{G_{il}}$; update: ${\beta _{il}} = {{\tilde \lambda _{il}} / {\displaystyle\sum\nolimits_{l' = 1}^L {{\tilde \lambda _{il'}}}}}$

    ${\gamma _s}$: $q\left( {{\gamma _s}} \right) \propto {\rm{Gamma}}\left( {{\gamma _s}|{\tilde c_0},{\tilde d_0}} \right)$
      ${\tilde c_0} = {c_0} + \dfrac{1}{2}NK$, ${\tilde d_0} = {d_0} + \dfrac{1}{2}\displaystyle\sum\nolimits_{i = 1}^N {\displaystyle\sum\nolimits_{k = 1}^K {\left( {\tilde \mu _{ik}^2 + \tilde \sigma _{ik}^2} \right)}}$; update: ${\gamma _s} = {{\tilde c_0} / {\tilde d_0}}$

    ${\gamma _\varepsilon }$: $q\left( {{\gamma _\varepsilon }} \right) \propto {\rm{Gamma}}\left( {{\gamma _\varepsilon }|{\tilde e_0},{\tilde f_0}} \right)$
      ${\tilde e_0} = {e_0} + \dfrac{1}{2}NP$, ${\tilde f_0} = {f_0} + \dfrac{1}{2}\displaystyle\sum\nolimits_{i = 1}^N {\left\| {{{x}_i} - {\hat{ D}}\left( {{{\hat{ s}}_i} \odot {{\hat{ z}}_i}} \right)} \right\|_2^2}$; update: ${\gamma _\varepsilon } = {{\tilde e_0} / {\tilde f_0}}$
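The two Gamma-posterior rows of Table 1 (the weight precision $\gamma_s$ and the noise precision $\gamma_\varepsilon$) are simple closed-form moment accumulations, and can be sketched in numpy as below. Array shapes, variable names, and the helper signatures are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def update_gamma_s(c0, d0, mu, sigma2):
    """VB update for the weight precision gamma_s (Table 1).

    c0, d0     : prior shape/rate hyperparameters
    mu, sigma2 : (N, K) variational means and variances of s_ik
    Returns the updated shape, rate, and point estimate c~0/d~0.
    """
    N, K = mu.shape
    c_t = c0 + 0.5 * N * K
    d_t = d0 + 0.5 * np.sum(mu ** 2 + sigma2)
    return c_t, d_t, c_t / d_t

def update_gamma_eps(e0, f0, X, D, S, Z):
    """VB update for the noise precision gamma_eps (Table 1).

    X : (P, N) data; D : (P, K) dictionary; S, Z : (K, N) weights
    and binary sparsity indicators. Accumulates the squared norm of
    the reconstruction residual X - D(S ⊙ Z).
    """
    P, N = X.shape
    resid = X - D @ (S * Z)
    e_t = e0 + 0.5 * N * P
    f_t = f0 + 0.5 * np.sum(resid ** 2)
    return e_t, f_t, e_t / f_t
```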
     Algorithm 1  The SSC-BPFA algorithm
     Input: training data sample set $\left\{ {{{x}}_i} \right\}_{i = 1}^N$
     Output: dictionary ${{D}}$, weights $\left\{ {{{s}}_i} \right\}_{i = 1}^N$, sparsity-pattern indicator vectors $\left\{ {{{z}}_i} \right\}_{i = 1}^N$, and cluster labels $\left\{ {{t_i}} \right\}_{i = 1}^N$
     Step 1  Set the convergence threshold $\tau$ and the iteration limit ${\rm{itr}}_{\max}$, initialize the number of atoms $K$ and the number of clusters $L$, and obtain an initial clustering of the data samples with the K-means algorithm;
     Step 2  Build the NPB-DL probabilistic model according to Eq. (1)-Eq. (15);
     Step 3  Initialize the latent-variable set ${{\varTheta}}$ and the variational-parameter set ${{H}}$, compute ${\hat{ X}} = {{D}}\left( {{{S}} \odot {{Z}}} \right)$, and set the iteration index ${\rm{itr}} = 1$;
     Step 4  Update ${{\varTheta}}$ and ${{H}}$ according to the VB inference formulas in Table 1, and compute ${{\hat{ X}}^{{\rm{new}}}} = {{D}}\left( {{{S}} \odot {{Z}}} \right)$;
     Step 5  Increase ${\rm{itr}}$ by 1, delete the unused atoms in ${{D}}$ and update $K$, and compute $r = {{\left\| {{{\hat{ X}}}^{{\rm{new}}}} - {\hat{ X}} \right\|_{\rm{F}}^2} / {\left\| {\hat{ X}} \right\|_{\rm{F}}^2}}$. If $r < \tau$ or ${\rm{itr}} \ge {\rm{itr}}_{\max}$, delete every cluster whose share of all samples falls below ${10^{ - 3}}$, reassign each sample of a deleted cluster to the remaining cluster with the largest ${\tilde \xi _{il}}$, and go to Step 6; otherwise return to Step 4;
     Step 6  Fix the estimates of ${t_i}$ and ${\beta _{il}}$ and their variational parameters, and keep iteratively updating the six latent variables ${{d}_k}$, ${s_{ik}}$, ${z_{ik}}$, $\pi _{lk}^ *$, ${\gamma _s}$, and ${\gamma _\varepsilon }$ together with their variational parameters until convergence is reached again.
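The outer control flow of Steps 4-5 (convergence test on the relative change of the reconstruction, plus pruning of unused atoms) can be sketched as follows. The `vb_update` argument is a placeholder standing in for one full sweep of the Table 1 updates; all names here are illustrative, not the authors' code.

```python
import numpy as np

def run_until_converged(X, D, S, Z, vb_update, tau=1e-4, itr_max=100):
    """Sketch of the Steps 4-5 loop of Algorithm 1 (control flow only).

    Stops when the relative change r of the reconstruction X_hat falls
    below tau or itr_max is reached; atoms whose indicator row is never
    active are pruned on every iteration, shrinking K adaptively.
    """
    X_hat = D @ (S * Z)
    for itr in range(1, itr_max + 1):
        D, S, Z = vb_update(X, D, S, Z)          # one VB sweep (Table 1)
        X_hat_new = D @ (S * Z)
        r = np.sum((X_hat_new - X_hat) ** 2) / np.sum(X_hat ** 2)
        used = (Z != 0).any(axis=1)              # atoms still in use
        D, S, Z = D[:, used], S[used], Z[used]   # drop unused atoms
        X_hat = X_hat_new
        if r < tau:
            break
    return D, S, Z, itr
```

With an identity `vb_update` the reconstruction does not change, so the loop exits after one iteration while still pruning any atom whose indicator row is all zero.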
Publication history
  • Received: 2019-07-03
  • Revised: 2020-02-28
  • Available online: 2020-09-01
  • Published: 2020-11-16
