{"id":1040,"date":"2018-06-01T18:55:22","date_gmt":"2018-06-01T18:55:22","guid":{"rendered":"http:\/\/www.codeastar.com\/?p=1040"},"modified":"2018-06-01T18:55:22","modified_gmt":"2018-06-01T18:55:22","slug":"lgb-winning-gradient-boosting-model","status":"publish","type":"post","link":"https:\/\/www.codeastar.com\/lgb-winning-gradient-boosting-model\/","title":{"rendered":"LGB, the winning Gradient Boosting model"},"content":{"rendered":"

Last time, we tried Kaggle's TalkingData Click Fraud Detection challenge, and we used limited resources to handle a dataset with 200 million records. Although we can make our classification with a Random Forest model, we still want a better scoring result. Inside the Click Fraud Detection challenge's leaderboard, I found that most of the high scoring outputs came from LightGBM (Light Gradient Boosting Machine, let's call it LGB in the following post :]] ). Our hunger for knowledge should never stop. Let's find out why LGB can win over other models.


LGB word by word explanation

LGB stands for Light Gradient Boosting. Let's start with the first word, "Light". The LGB development team claims that LGB is fast in model training and low in memory usage. Like every developer on the planet, their product is always the best in the world. Then who are the LGB developers? Microsoft. If you are as big as Microsoft, well, you do have the right to say something like that. (What about Internet Explorer, Zune, Windows Mobile and MSN Messenger?)

Microsoft’s LGB team provides a list of comparisons<\/a> on how well LGB can out-perform XGB (eXtreme Gradient Boosting). Which XGB is another top winning machine learning model in current days. From the information provided by the LGB team, LGB is around 100% faster than XGB and uses only 25% of XGB’s memory, on same data science challenges.<\/p>\n

Then we go for the next word, "Gradient". It describes a step-by-step changing process, moving from an initial state toward a better one by following the direction of the error. So, what are we going to change? We can find the answer in the B of LGB.

“Boosting”. Boosting is a step to make something great again (does it sound familiar?). In machine learning, we use boosting to make weak learners great again. Then we have another question, what are “weak learners” there? Weak learners are models with\u00a0high bias and low variance. (Remember the bartender in “Do you have a dog?<\/a>” comic?)<\/p>\n

Gradient Boosting

The term “Gradient Boosting” is a way to make weak learners becoming a good model. We start working from our first weak learner model and get the first round of predictions. Then we have our first error residuals (e1) by subtracting prediction values (y1) from actual values (y0). i.e. e1 = y0 – y1<\/em><\/p>\n

Now we go for our next round of predictions. This time we fit the model with the residuals (e1) and get a new set of prediction values (ye1). Our final prediction for this round (y2) will be the previous prediction (y1) plus the prediction made from the residuals (ye1), i.e. y2 = y1 + ye1

Then our new residual (e2) will be the actual values (y0) minus the round 2 prediction (y2), i.e. e2 = y0 - y2

We use the second residual (e2) to fit our third model and repeat the above steps. We stop when our model returns residuals close to 0 after a certain number of rounds of training. In this way we build a good model, step by step, from a weak one.
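To make the loop above concrete, here is a minimal sketch of that residual-fitting process, using a shallow scikit-learn decision tree as the weak learner. The toy data and names are made up for illustration; real gradient boosting libraries also scale each update by a learning rate, as noted in the comments.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))                 # toy feature
y0 = np.sin(X).ravel() + rng.normal(0, 0.1, 200)      # actual values

prediction = np.zeros_like(y0)                        # initial (very weak) prediction
learning_rate = 0.1                                   # y2 = y1 + ye1 corresponds to a rate of 1.0

for round_no in range(200):
    residual = y0 - prediction                        # e = actual values - current prediction
    weak_learner = DecisionTreeRegressor(max_depth=2) # weak learner: high bias, low variance
    weak_learner.fit(X, residual)                     # fit the next model on the residuals
    prediction += learning_rate * weak_learner.predict(X)  # add the residual prediction back

print("mean absolute residual after boosting:", np.abs(y0 - prediction).mean())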

LGB and XGB in action

Microsoft has mentioned how LGB is superior to XGB on their GitHub page. Well, you know, I am a long-time user of Microsoft products, so I know I have to find out the truth myself.

We use the Click Fraud Detection dataset in a Kaggle kernel with 17 GB of RAM for our own LGB and XGB comparison. First, we define our LGB and XGB models with the following settings:

import lightgbm as lgb
lgb_model = lgb.LGBMClassifier(learning_rate = 0.1,
                               num_leaves = 65,
                               n_estimators = 600)

import xgboost as xgb
xgb_model = xgb.XGBClassifier(learning_rate = 0.1,
                              max_depth = 6,
                              n_estimators = 600)

Since the XGB model uses a depth-wise algorithm while the LGB model uses a leaf-wise algorithm, we set the XGB model with max_depth = 6 and the LGB model with num_leaves = 65 (65 = 2 ^ 6 + 1), so both can grow trees of comparable complexity.
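For reference, a comparison run could look like the sketch below. The X_train / X_valid / y_train / y_valid names are placeholders for the prepared TalkingData features and the is_attributed label; they are assumptions, not code from this post.

import time
from sklearn.metrics import roc_auc_score

for name, model in [("XGB", xgb_model), ("LGB", lgb_model)]:
    start = time.time()
    model.fit(X_train, y_train)                       # train on the prepared features
    run_time = time.time() - start
    auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
    print(f"{name}: {run_time:.2f} seconds, AUC = {auc:.8f}")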

Here come the results:

| # of Records | # of Features | Run time for XGB | Run time for LGB | AUC score for XGB | AUC score for LGB | RAM usage for XGB | RAM usage for LGB |
|---|---|---|---|---|---|---|---|
| 17,000,000 | 9 | 7371.04 seconds | 637.33 seconds | 0.96469330 | 0.95870608 | 3.20 GB | 0.32 GB |
| 25,000,000 | 9 | 10705.20 seconds | 715.79 seconds | 0.96679944 | 0.96126930 | 4.58 GB | 0.34 GB |
| 30,000,000 | 9 | 12604.31 seconds | 914.96 seconds | 0.96696078 | 0.96135648 | 5.49 GB | 0.35 GB |
| 17,000,000 | 17 | 12668.08 seconds | 941.83 seconds | 0.96533822 | 0.95707770 | 5.37 GB | 0.35 GB |

It is obvious that XGB outperforms LGB in accuracy under the same settings. But Microsoft is right about two things: LGB is fast in speed and light in memory usage. While XGB needs about 3.5 hours to handle 30 million records, LGB needs only about 15 minutes. And LGB can do the same task with around 10% of XGB's memory, which is really impressive.
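If you want to reproduce the run-time and RAM figures yourself, one rough way on a Linux machine (such as a Kaggle kernel) is to compare the process's peak resident memory before and after training. This is only an approximation, and X_train / y_train are the same placeholder names as above.

import resource, time

before_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss   # peak RAM so far, in KB on Linux
start = time.time()
lgb_model.fit(X_train, y_train)
after_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

print(f"run time: {time.time() - start:.2f} seconds, "
      f"extra peak RAM: {(after_kb - before_kb) / 1024 / 1024:.2f} GB")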

Since model tuning is an important part of machine learning, a fast and light model is definitely an advantage when we want to iterate toward a better outcome.

LGB and XGB in action, part 2

Now we switch to our next testing round with a smaller dataset, the Iowa House Prices dataset. This time, we use regressor models and run 30-fold and 100-fold cross-validation with the following settings:

lgb_model = lgb.LGBMRegressor(learning_rate = 0.05,
                              num_leaves = 65,
                              n_estimators = 600)

xgb_model = xgb.XGBRegressor(learning_rate = 0.05,
                             max_depth = 6,
                             n_estimators = 600)
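The cross-validation itself could be run along these lines. Here X and y stand for the prepared Iowa features and the (typically log-transformed) sale price, which are assumptions for this sketch rather than code from this post.

import numpy as np
from sklearn.model_selection import cross_val_score

def cv_rmsd(model, X, y, folds):
    # cross_val_score returns negative MSE, so flip the sign and take the square root
    scores = cross_val_score(model, X, y, cv=folds, scoring="neg_mean_squared_error")
    return np.sqrt(-scores).mean()

for folds in (30, 100):
    print(f"{folds}-fold  XGB RMSD: {cv_rmsd(xgb_model, X, y, folds):.5f}  "
          f"LGB RMSD: {cv_rmsd(lgb_model, X, y, folds):.5f}")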

And here come the results again:

| # of Records | # of Features | K fold | Run time for XGB | Run time for LGB | RMSD for XGB | RMSD for LGB | RAM usage for XGB | RAM usage for LGB |
|---|---|---|---|---|---|---|---|---|
| 1,456 | 78 | 30 | 67.14 seconds | 73.61 seconds | 0.13133 | 0.13296 | 5.86 MB | 7.08 MB |
| 1,456 | 78 | 100 | 325.63 seconds | 354.41 seconds | 0.13073 | 0.13483 | 6.60 MB | 7.13 MB |

On a smaller dataset, XGB does better than LGB in terms of speed, accuracy and memory usage. And as we train the models on more folds, the XGB model becomes more accurate while the LGB model becomes less accurate.

\"what\"<\/p>\n

Actually, this is a consequence of how LGB grows its trees. Most other decision tree models use a level-wise growth algorithm during model training.

\"depth-first<\/p>\n

(image source:\u00a0https:\/\/github.com\/Microsoft\/LightGBM\/<\/a>)<\/p>\n

LGB uses a leaf-wise algorithm instead. When the dataset is small, leaf-wise growth is more sensitive to variations in the training data and may keep adding extra leaves, so the model overfits and scores worse on the testing data.

[Figure: leaf-wise (best-first) tree growth, image source: https://github.com/Microsoft/LightGBM/]

You may ask: why doesn't the same thing happen with XGB, since it is a gradient boosting model as well? Yes, XGB is a gradient boosting model, but unless you explicitly change its growth policy, XGB grows trees level-wise, which is less prone to this kind of overfitting on a smaller dataset.
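For completeness, XGBoost can be switched to leaf-wise growth if you ask for it. The sketch below uses the grow_policy and max_leaves parameters of the histogram-based tree method; exact behaviour can vary between XGBoost versions, so treat it as an illustration rather than part of the benchmark above.

xgb_leafwise = xgb.XGBClassifier(learning_rate = 0.1,
                                 n_estimators = 600,
                                 tree_method = "hist",       # histogram-based trees, needed for lossguide
                                 grow_policy = "lossguide",  # leaf-wise ("best-first") growth, like LGB
                                 max_leaves = 65)            # cap the leaf count, like LGB's num_leaves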

Conclusion

Both LGB and XGB are powerful, competition-winning models in data science. For a larger dataset, if you have enough time and resources, go with XGB; otherwise, LGB would be a better choice for tuning and tweaking. For a smaller dataset, go the XGB way to avoid LGB's overfitting.

What have we learnt in this post?

1. Overview of LGB
2. What is Gradient Boosting
3. LGB and XGB comparison
4. LGB's overfitting on smaller datasets
