Commit 77d9c7c

change style
1 parent beb0c70 commit 77d9c7c

File tree

4 files changed: +37 −19 lines changed


blog/AppS/DeepFM.md

Lines changed: 8 additions & 4 deletions
```diff
@@ -2,7 +2,9 @@
 
 In the field of recommendation systems, efficiently combining low-order and high-order feature interactions to improve prediction accuracy has always been a key challenge. The DeepFM model offers a solution that combines memory capacity and generalization ability by integrating Factorization Machines (FM) with Deep Neural Networks (DNN). This article will introduce the application and effectiveness of DeepFM in the AppS business.
 
-<img src="../../static/images/deepfm.svg" title="" alt="" width="522" data-align="center">
+<center>
+<img src="../../static/images/deepfm.svg" title="" alt="" width="400" data-align="center">
+</center>
 
 ## Introduction
 
```

```diff
@@ -113,12 +115,14 @@ Building on our experience with FM model training, the DeepFM model excels in co
 
 In the "Guess You Like" module, deploying the DeepFM model led to a **4.66%** increase in average distribution per user. This result indicates that DeepFM significantly enhances the quality of personalized recommendations for users.
 
+<center>
 <img src="../../static/images/DeepFM-AB.png" title="" alt="" width="522" data-align="center">
+</center>
 
 ## Further Reading
 
-- [A Factorization-Machine based Neural Network for CTR Prediction - arXiv](https://arxiv.org/abs/1703.04247)
+[A Factorization-Machine based Neural Network for CTR Prediction - arXiv](https://arxiv.org/abs/1703.04247)
 
-- [Deep Factorization Machines — Dive into Deep Learning](https://d2l.ai/chapter_recommender-systems/deepfm.html)
+[Deep Factorization Machines — Dive into Deep Learning](https://d2l.ai/chapter_recommender-systems/deepfm.html)
 
-- [DeepFM for recommendation systems explained with codes](https://medium.com/data-science-in-your-pocket/deepfm-for-recommendation-systems-explained-with-codes-c200063990f7)
+[DeepFM for recommendation systems explained with codes](https://medium.com/data-science-in-your-pocket/deepfm-for-recommendation-systems-explained-with-codes-c200063990f7)
```
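For readers of this diff who want the architecture behind `DeepFM.md` in concrete form: a minimal pure-Python sketch of a DeepFM-style forward pass, combining an FM component with a one-hidden-layer DNN over the same latent vectors. This is an illustration only, not the production model; every weight and name here is made up.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def deepfm_score(x, w0, w, V, W1, b1, w2, b2):
    """DeepFM-style score: FM component + a small DNN that consumes the
    same per-feature latent vectors V (the two parts share embeddings)."""
    n, k = len(x), len(V[0])
    # FM component: bias + linear + pairwise, via the O(n*k) identity
    # sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i V[i][f]*x[i])^2 - sum_i (V[i][f]*x[i])^2 ]
    fm = w0 + sum(w[i] * x[i] for i in range(n))
    summed = [sum(V[i][f] * x[i] for i in range(n)) for f in range(k)]
    for f in range(k):
        sq = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        fm += 0.5 * (summed[f] ** 2 - sq)
    # Deep component: ReLU hidden layer over the shared representation
    hidden = [max(0.0, sum(W1[h][f] * summed[f] for f in range(k)) + b1[h])
              for h in range(len(W1))]
    deep = sum(w2[h] * hidden[h] for h in range(len(hidden))) + b2
    return sigmoid(fm + deep)
```

With the deep weights zeroed out, the score reduces to a plain FM prediction, which is one way to sanity-check an implementation.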

blog/AppS/ESMM.md

Lines changed: 11 additions & 5 deletions
```diff
@@ -2,7 +2,9 @@
 
 In modern recommendation systems, particularly within the AppS business environment, predicting user behaviors such as Click-Through Rate (CTR) and Conversion Rate (CVR) is crucial for enhancing user satisfaction and driving business growth. The ESMM model, with its unique architecture and efficient multi-task learning capability, offers an outstanding solution for the AppS business.
 
-<img title="" src="../../static/images/ESMM-origin.webp" alt="" width="522" data-align="center">
+<center>
+<img title="" src="../../static/images/ESMM-origin.webp" alt="" width="400" data-align="center">
+</center>
 
 ## Introduction
 
```


```diff
@@ -48,7 +50,9 @@ The ESMM model shares many structural similarities with the traditional MMOE mod
 
 ### Key Components of the ESMM Model
 
-<img title="" src="../../static/images/ESMM.webp" alt="" width="522" data-align="center">
+<center>
+<img title="" src="../../static/images/ESMM.webp" alt="" width="400" data-align="center">
+</center>
 
 #### Two Expert Networks
 
```


```diff
@@ -62,12 +66,14 @@ In addition to expert networks, ESMM also employs two gating mechanisms to contr
 
 In practical applications within the AppS business, the ESMM model has demonstrated significant results through A/B testing. In the "Guess You Like" module, the ESMM model successfully achieved a 6.45% increase in average distribution per user.
 
+<center>
 <img title="" src="../../static/images/ESMM-AB.png" alt="" width="522" data-align="center">
+</center>
 
 ## Further Reading
 
-- [Entire Space Multi-Task Model: An Effective Approach for Estimating ... - arXiv](https://arxiv.org/abs/1804.07931)
+[Entire Space Multi-Task Model: An Effective Approach for Estimating ... - arXiv](https://arxiv.org/abs/1804.07931)
 
-- [ESMM &mdash; easy_rec 0.8.5 documentation](https://easyrec.readthedocs.io/en/latest/models/esmm.html)
+[ESMM &mdash; easy_rec 0.8.5 documentation](https://easyrec.readthedocs.io/en/latest/models/esmm.html)
 
-- [GitHub - dai08srhg/ESMM: PyTorch implementation of Entire Space Multitask Model (ESMM)](https://github.com/dai08srhg/ESMM)
+[GitHub - dai08srhg/ESMM: PyTorch implementation of Entire Space Multitask Model (ESMM)](https://github.com/dai08srhg/ESMM)
```
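The central idea of the `ESMM.md` post, the entire-space decomposition pCTCVR = pCTR × pCVR, fits in a few lines. A minimal sketch with single linear "towers" over a shared embedding; real ESMM towers are deep networks, and all names here are hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def esmm_predict(shared_emb, ctr_w, cvr_w):
    """ESMM decomposition: the CTR and CVR towers share an embedding,
    and the supervised target over the entire exposure space is
    pCTCVR = pCTR * pCVR, which sidesteps CVR sample-selection bias
    (CVR labels only exist for clicked impressions)."""
    p_ctr = sigmoid(sum(a * b for a, b in zip(shared_emb, ctr_w)))
    p_cvr = sigmoid(sum(a * b for a, b in zip(shared_emb, cvr_w)))
    return p_ctr, p_cvr, p_ctr * p_cvr
```

Because both factors lie in (0, 1), pCTCVR is always bounded above by each of pCTR and pCVR, which is a useful invariant to assert in tests.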

blog/AppS/FM.md

Lines changed: 8 additions & 3 deletions
```diff
@@ -4,7 +4,9 @@
 
 Factorization Machines (FM) are powerful machine learning models, especially widely used in recommendation systems and advertising click-through rate prediction. FM models can effectively capture the cross information between features and are highly efficient and easy to implement in engineering.
 
-<img title="" src="../../static/images/FM.jpg" alt="" width="522" data-align="center">
+<center>
+<img title="" src="../../static/images/FM.jpg" alt="" width="400" data-align="center">
+</center>
 
 ### 1. Advantages of FM over Linear Regression (LR)
 
```


```diff
@@ -106,10 +108,13 @@ To resolve this issue, we differentiated the reported data on the first screen,
 
 The effect of online weighting was verified through AB testing. Specific AB test screenshots will be presented here, further proving the effectiveness of our optimization strategies in practical applications. In the **Guess You Like** section on the homepage, the average distribution per person increased by **14.8%**.
 
+<center>
 <img title="" src="../../static/images/FM-AB.webp" alt="" width="522" data-align="center">
 
+</center>
+
 ## Further Reading
 
-- [Factorization Machine models in PyTorch - GitHub](https://github.com/rixwew/pytorch-fm)
+[Factorization Machine models in PyTorch - GitHub](https://github.com/rixwew/pytorch-fm)
 
-- [Factorization Machines](https://d2l.ai/chapter_recommender-systems/fm.html)
+[Factorization Machines](https://d2l.ai/chapter_recommender-systems/fm.html)
```
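The feature-cross mechanism that `FM.md` describes comes down to one formula. A minimal sketch of the FM score using the standard O(n·k) reformulation of the pairwise term; the toy numbers are made up for illustration:

```python
def fm_predict(x, w0, w, V):
    """Factorization Machine score: bias + linear term + pairwise
    interactions, computed with the O(n*k) identity
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i V[i][f]*x[i])^2 - sum_i (V[i][f]*x[i])^2 ]
    x: feature values; w: linear weights; V: per-feature latent vectors."""
    n, k = len(x), len(V[0])
    score = w0 + sum(w[i] * x[i] for i in range(n))
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(n))
        sq = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        score += 0.5 * (s * s - sq)
    return score

# Toy call: three features, two latent factors (illustrative numbers).
print(fm_predict([1.0, 0.0, 2.0], 0.5,
                 [0.1, -0.2, 0.3],
                 [[0.2, 0.1], [0.4, -0.3], [-0.1, 0.5]]))  # ≈ 1.26
```

The reformulation is what makes FM cheap enough for online serving: it avoids enumerating all feature pairs explicitly.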

blog/AppS/MMOE.md

Lines changed: 10 additions & 7 deletions
```diff
@@ -1,9 +1,9 @@
 # Application of the MMOE Model in AppS
 
 In the AppS business, recommendation systems need not only to improve user Click-Through Rate (CTR) but also to enhance Conversion Rate (CVR) to achieve comprehensive user engagement and business growth. The Multi-gate Mixture-of-Experts (MMOE) model offers an efficient solution by simultaneously optimizing multiple objectives to meet these business needs.
-
-<img title="" src="../../static/images/MMOE-origin.webp" alt="" data-align="center" width="522">
-
+<center>
+<img title="" src="../../static/images/MMOE-origin.webp" alt="" data-align="center" width="400">
+</center>
 ## Introduction
 
 In the field of recommendation systems and advertising, models often need to optimize multiple objectives simultaneously, such as Click-Through Rate (CTR) and Conversion Rate (CVR). The Multi-gate Mixture-of-Experts (MMOE) model provides an effective solution by achieving better goal synergy optimization within a multi-task learning framework.
```

```diff
@@ -67,8 +67,9 @@ During model training, we use click and download behaviors as task labels and de
 - The loss weight for PCTCVR is set to 0.05, ensuring that download behavior receives appropriate attention.
 
 This weight allocation ensures that CTR is the primary optimization direction while also considering the CVR objective.
-
-<img title="" src="../../static/images/mmoe.webp" alt="" data-align="center" width="445">
+<center>
+<img title="" src="../../static/images/mmoe.webp" alt="" data-align="center" width="400">
+</center>
 
 #### 5. Online Inference and Ranking
 
```

```diff
@@ -78,15 +79,17 @@ During online inference, we apply the same weights to PCTR and PCVR and rank the
 
 By applying the MMOE model in the "Guess You Like" module, our AB testing results showed a **13.1%** increase in average distribution per user. This significant improvement validates the effectiveness of the MMOE model in simultaneously optimizing CTR and CVR, bringing higher user engagement and conversion rates to the AppS business.
 
+<center>
 <img title="" src="../../static/images/MMOE-AB.png" alt="" width="522" data-align="center">
+</center>
 
 ### Conclusion
 
 The MMOE model achieves comprehensive optimization of CTR and CVR in the AppS business through its flexible expert and gating mechanisms. Combined with the FM cross strategy, MMOE not only enhances the predictive ability of recommendation systems but also improves multi-objective synergy optimization of user behavior, providing strong support for business development.
 
 ## Further Reading
 
-- [Modeling Task Relationships in Multi-task Learning with
+[Modeling Task Relationships in Multi-task Learning with
 Multi-gate Mixture-of-Experts](https://dl.acm.org/doi/pdf/10.1145/3219819.3220007)
 
-- [The Annotated Multi-Task Ranker: An MMoE Code Example](https://www.yuan-meng.com/posts/mtml/)
+[The Annotated Multi-Task Ranker: An MMoE Code Example](https://www.yuan-meng.com/posts/mtml/)
```
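The expert-and-gate mechanism that `MMOE.md` relies on can be sketched compactly: shared experts, plus one softmax gate per task that mixes expert outputs before a task-specific tower. A pure-Python illustration with made-up shapes and weights, not the production network:

```python
import math

def softmax(zs):
    """Numerically stable softmax over a list of logits."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def mmoe_forward(x, experts, gates, towers):
    """MMOE forward pass: shared experts, one softmax gate per task.
    experts: list of weight matrices; expert output = ReLU(W @ x)
    gates:   per-task matrices mapping x to one logit per expert
    towers:  per-task weight vectors producing a sigmoid score"""
    expert_out = [
        [max(0.0, sum(row[i] * x[i] for i in range(len(x)))) for row in W]
        for W in experts
    ]  # one hidden vector per expert
    d = len(expert_out[0])
    scores = []
    for G, t in zip(gates, towers):
        logits = [sum(g[i] * x[i] for i in range(len(x))) for g in G]
        weights = softmax(logits)  # per-task mixture over experts
        mixed = [sum(weights[e] * expert_out[e][j] for e in range(len(experts)))
                 for j in range(d)]
        z = sum(t[j] * mixed[j] for j in range(d))
        scores.append(1.0 / (1.0 + math.exp(-z)))  # e.g. PCTR, PCTCVR
    return scores
```

Because each task gets its own gate, the tasks can weight the same experts differently, which is the property the post credits for CTR/CVR synergy.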
