posted about 9 hours ago on hacker noon
As web developers in a world of complex requirements and looming deadlines, we'll need all the help we can get. If you struggle to turn those requirements into working code and to fix major bugs, your deployment date could slip considerably.

Also, the more delays you have, the more likely you are to fall behind competitors or lose customers. And we haven't even talked about the burdens that come with software maintenance yet.

Thankfully, as you might already know, there's a much easier way to accomplish your development goals: using front-end development frameworks. But which framework should you choose? And how can these frameworks help you?

These are all questions that we will answer in this article. Before we get to that, though, let's review what a front-end development framework is.

What Are Front-End Development Frameworks?

Frameworks are pre-written code libraries. You can add them to your application to improve its structure, abstraction, maintenance, and development speed.

Front-end frameworks, then, are simply frameworks used for building the interface of a website or application. You can think of them as blueprints to build your code upon. Examples of such frameworks include:

- React
- Angular
- Vue.js
- Ember.js
- Bootstrap
- Django

They make it easy for web developers to build beautiful, responsive interfaces with straightforward routing and navigation, and to write code that works well. On top of that, they let developers work faster and cleaner.

Simply put, a front-end framework is a modern web developer's best friend. And like any good friend, a framework will save you on tough days. So, what exactly can they bring to your application and development process?

What Are the Benefits of Using These Frameworks?

Front-end frameworks have become an integral part of web development in recent years. Some people might find it hard to imagine building a big website or app without one. That's because of the numerous benefits they bring.

Although using a framework in your projects is not required, it helps greatly, especially in bigger applications.

In this section, we'll discuss why using a front-end framework is a good idea when building web apps.

1. They're a Huge Timesaver

The "why reinvent the wheel?" notion also applies to front-end frameworks. As your projects grow in pages or functionality, you'll spend more and more time coding.

And then you'll probably think, "Oops, my code's a mess." It's time to refactor it and add more abstraction layers.

You might also think about making your code reusable for future projects, to be efficient in the long run. Finally, after weeks or months of improving your code's readability and structure, you will have created a framework or library of your own. A not-so-mature or incomplete one, at that.

All that time spent on extra coding work could have been better spent on other features of your website. If you use a framework, you can focus on the more important aspects of your project.

Hence, unless your project is personal, very small, or non-commercial, you should probably use a framework.

2. They Provide Superb Routing and Navigation

Users appreciate good web navigation and routing. For them, it can mean not having to reload or refresh the website after making small changes. One of the best qualities of modern websites and apps is their ability to make as few changes as possible.
This is because the more changes you make after a user action, the more server calls and resources you'll need. That leads to slower performance and a much more frustrating user experience.

Luckily, most front-end frameworks have built-in routing. But can't you handle that with plain HTML, JavaScript, and AJAX calls? You can, but routing in most frameworks lets you do much more.

With it, you can set up routing options such as how to handle navigation errors, how to inspect the router's state, and even how to restore canceled navigation. It lets your website behave like a mobile application, with a single page that changes views depending on certain conditions.

Frameworks with routing capabilities also allow developers to create complex page navigation and keep track of variables and data more easily.

3. They Let You Use Data Binding

Data binding is one of the best perks of having a front-end framework. It refers to a technique in which you bind a data source to a data target, and the target automatically takes the source's value. Whatever changes happen to the data source are then applied to the data target.

In other words, you won't have to explicitly update the target every time the source's value changes. Instead, you only need to bind the source and target once, and that's it.

For example, let's say you have a text input field. Normally, to get its value, you'd do something like:

var myText = document.getElementById('myInput').value;

However, whenever the input field's value changes, you must call this line of code again. With data binding, on the other hand, you can declare an HTML element as a data source (e.g., {{myInput}}) and then bind the target (e.g., var myText) to it once.

4. They're Reusable

Front-end frameworks are readily available and continuously maintained. Large teams of experienced developers have worked on them to help others write code faster and better.

Thus, you can keep using them across different projects. This helps your team stay efficient when handling multiple or successive projects.

There are other benefits not covered here, but these are some of the best ones. Once you have your framework of choice, developing your application should be much easier. However, here comes the not-so-easy part.

With so many front-end frameworks to choose from, it might be hard to find the right one. In the next section, we'll help you choose between the two most popular and well-loved front-end frameworks.

Choosing Between Angular and React

The open-source frameworks Angular and React are two of the most popular and widely used ones on the market today. That's because they integrate well with other tools, are fast, have many features, have a large user base, and more.

They both get the job done as front-end frameworks, but they have their differences and unique properties.

For instance, Angular is based on TypeScript, while React is based on JavaScript. There are areas in development where one outshines the other.

Because they both increase a developer's efficiency while improving code structure, you may find it hard to choose between them. To help you decide, read on and learn more about these two frameworks.

Angular

Angular is a TypeScript-based front-end framework that Google released in 2016. It stemmed from AngularJS, a now-archived JavaScript-based framework by the same company and team.

In its first few years, AngularJS faced various limitations, which led Google to rewrite the framework completely.
And from there came Angular 2.0, the first version of the TypeScript-based Angular. At first, it confused developers, especially those coming from AngularJS. But when it hit maturity a few years later, Angular became one of the most popular frameworks.

Anyway, that's enough history. You're here to learn what these frameworks can do, right?

Pros:

- Component-based, which means that functionalities are encapsulated into separate components. In other words, you can use only the components you need and reuse any component in other projects.
- MVC (Model-View-Controller) architecture, which separates an application into three aspects: the data, the interface, and the logic that connects the two. This contributes to readability and easier maintenance of the code.
- Libraries for routing, forms, server calls, and other requirements
- Two-way data binding (the data target and data source are synchronized automatically, without additional or explicit coding)
- Cross-platform; it can even be used for building mobile apps with Cordova or Ionic
- Perfect for SPAs (Single-Page Applications)
- Templates that allow you to build views quickly
- High performance
- Angular CLI, a command-line interface that lets you create projects, add components, run tests, and deploy more efficiently
- Comprehensive documentation and support from Google
- Ranked as the 5th most popular framework in Stack Overflow's 2022 Developer Survey

Cons:

- Difficult to migrate from AngularJS
- A higher learning curve than other front-end frameworks, which makes it less suitable for beginners
- Its MVC architecture means your code looks better, but you'll also have to create and maintain more of it
- Ranked 9th (out of 25) among the most dreaded front-end frameworks in the 2022 Stack Overflow survey (47.73% of respondents dreaded it)

React

Okay, before we discuss React, let's get one thing straight: React is actually a library, not a framework. That's because React relies on pre-made third-party packages instead of built-in features.

This gives React more flexibility and freedom compared to the blueprint-like mindset of frameworks.

So why is it always compared to frameworks like Angular? It could be because React popularized component-based architecture, which led other frameworks, like Angular, to adopt it as well.

React is a JavaScript-based library made and used by Meta (then called Facebook). It was made available to the public in 2013. Its modularity makes it easier for developers to add, remove, and maintain specific app functionalities.

Read on to learn more about the advantages and disadvantages of using React.

Pros:

- Component-based
- A virtual DOM-based approach. The virtual DOM is a concept wherein libraries or frameworks create custom objects in place of actual DOM elements and manipulate those objects without touching the real DOM. Think of the virtual DOM as a draft and the actual DOM as a published article; it's much easier to make changes to a draft than to a published article. This approach makes React much faster than most libraries and frameworks.
- Components and packages for routing, forms, server calls, and other requirements
- One-way data binding (it relies on an HTML event to update the value of a data target based on a source, i.e., components are not updated automatically)
- Cross-platform
- Perfect for web apps that need top-notch performance and more flexibility
- Extremely fast and lightweight
- Comprehensive documentation and support from Meta
- Easy to learn and migrate to, so developers can spend less time learning and more time working on their core features
- Easy to integrate with other tools (e.g., a WYSIWYG editor)
- In the 2022 Stack Overflow Developer Survey, React ranks as the 1st most wanted, 6th most loved, and 2nd most popular web technology

Cons:

- Because it's a library, React offers many more ways to solve a problem than a framework does, which can lead to indecision or confusion
- Its lack of MVC architecture means that the view is mixed with the logic, so frameworks do better in terms of readability and maintenance

So, which one should you use? The answer depends entirely on your needs. Are you working on an SPA with app-like behavior? Do you want to prioritize code structure and readability? Are you willing to put in extra effort and time to learn a framework?

Then maybe you should go with Angular.

On the other hand, do you want fast, modern, and easily customizable applications? Do you want to start working right away with minimal learning or training? Do you want more control and flexibility in developing your web app?

Then React might suit your needs better. But whichever one you choose, you'll have a better development experience than if you chose neither.

Conclusion

This article compared two powerful open-source front-end technologies, Angular and React. To understand the importance of frameworks, we looked at how such technologies take web development to the next level.

Furthermore, we explored Angular's and React's history, features, benefits, and disadvantages.

In the end, the choice between Angular and React is yours. Assess your needs, take the time to learn the capabilities of these technologies, use this guide, and try them out. Any amount of time you invest in learning these frameworks (and libraries) will be worth it.

Read More...
posted about 9 hours ago on hacker noon
A website's success is closely tied to the amount of traffic it receives. While the number of visitors is not always indicative of success on its own, it is essential to understand the data and insights behind each traffic report. By unlocking the secrets of your website traffic, you can gain a better understanding of when, how, and why visitors arrive. Analyzing the data from website traffic reports can help you understand your website's performance, identify areas for improvement, and make decisions about how to optimize content and design for better user engagement. With the right knowledge and analysis, you can make informed decisions about how to best reach and engage your target audience and maximize the success of your website.

Understanding Website Traffic Reports

A traffic report is a summary of website traffic metrics, such as the number of visitors, sources, and pages viewed. Site traffic reports can be found in the analytics dashboard of your website. Each report provides insight into different aspects of your traffic, including sources, keywords, page views, exit pages, and more. While each website traffic report will have different metrics, they all provide information on the following:

- The number of visits
- The number of unique and returning visitors
- The amount of time spent on your website
- The bounce rate
- The pages viewed
- The location of your visitors and the source of your traffic

Reports can be segmented by day, week, or month, and by device. You can also view data aggregated over a specific period of time, such as the past 30 days or since the website went live.

Understanding the Types of Website Traffic

Website traffic is classified into two main categories:

- Targeted organic traffic: generated from search engine queries. It also includes referrals from social media and other websites.
- Paid traffic: generated from advertisements, sponsored content, and links from paid-for directories and websites.

The type of traffic you see on your website will change over time, especially in the first few months after the website goes live. This is because of changing search engine algorithms and the amount of effort and time it takes to rank for keywords. It's important to monitor changes in traffic sources to spot trends or anomalies. By doing so, you can identify areas for improvement and make the changes needed to increase the volume of website traffic.

Analyzing Website Data to Identify Trends

Website traffic data can help you identify trends and see how changes in your marketing strategy are impacting the performance of your website. By analyzing the data over the course of several months, you can get a better understanding of the trends and patterns behind it. You can segment the data to compare different time periods. For instance, you can compare the data from the month before you implemented a new marketing strategy to the data from the month after the implementation.

When analyzing data, try to answer the following questions:

- What trends are emerging in terms of the number of visitors, sources, and pages?
- What content is most popular, and why?
- Are certain pages seeing an increase or decrease in traffic?
- What keywords are driving the most traffic?

How to Use Website Traffic Data to Improve User Engagement

Website traffic data can be used to understand where your visitors are dropping off and how you can improve their experience with your content.
By analyzing the data, you can identify areas for improvement in terms of user engagement, such as bounce rate and pages viewed. You can then use this information to make changes to your content and design. For example, if a high percentage of visitors leave your website after viewing only one page, you may want to add content that encourages visitors to stay longer and engage with your brand. You can also use website traffic data to optimize content. For example, if you notice that a certain keyword is bringing in a significant amount of traffic but a low percentage of searchers are clicking through to your website, you can optimize your content to increase the number of organic clicks.

How to Use Website Traffic Data to Optimize Content and Design

Analyzing website data can help you identify areas for improvement in terms of content and design. Website traffic data will tell you which pages receive the most traffic, but it may not tell you why. For example, you may notice that your About page receives the highest number of visits, but it's unclear why. As a result, you may want to focus on optimizing the content of the About page.

There are many ways to optimize content and design using website traffic data. You can:

- Test different variations of your website content
- Use website traffic data to create user personas
- Create a content strategy to increase engagement and conversion
- Optimize your website design to maximize the number of visitors seeing your content
- Create a social media content strategy to grow your online presence

Best Practices for Analyzing Website Traffic

- Before making any major changes to your website or content, wait at least a few months to get a better understanding of the traffic trends. This will help you make more informed decisions about any potential changes.
- Be patient. Traffic patterns often take time to change. Give your website enough time to start seeing an increase in traffic before concluding that your strategy isn't working.
- Make sure you're collecting the right data. It's important to verify that you're looking at the right metrics and that they're being calculated correctly.
- Be open to change. Achieving the right balance can take time, so be patient and open to making adjustments along the way.
- When in doubt, ask for help. If you're facing an analytics crisis and can't make sense of the numbers, reach out to your team. They've likely dealt with similar issues before and can help you get the answers you need.

Benefits of Understanding Website Traffic

By understanding how traffic to your website is generated and how it flows through your site, you can make better decisions about your content and design, and you can target your audience more effectively. Knowing how many people visit your site and from what sources will give you insight into which of your marketing efforts are working and which need improvement. It will also help you understand what type of user is visiting your site. This information can then be used to tailor future marketing efforts and ensure that you are reaching the right audience. Additionally, website traffic reports can help you identify areas for improvement in terms of content and design.
Through these reports, you can identify pages that receive the most traffic and find out what percentage of users are clicking through to other pages on your site.

Conclusion

Website traffic reports are a great way to measure the success of your website. By understanding what the reports indicate, you can gain a better understanding of when, how, and why visitors arrive. Analyzing website traffic will help you understand the performance of your website, identify areas for improvement, and make decisions about how to optimize content and design for better user engagement. With the right knowledge and analysis, you can make informed decisions about how to reach and engage with your target audience and maximize the success of your website.
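To make the metrics discussed above a little more concrete, here is a minimal pandas sketch of how a raw visit log could be rolled up into a few of the numbers a traffic report shows. The column names, the sample data, and the single-pageview definition of a bounce are assumptions for illustration, not a reference to any particular analytics tool.

import pandas as pd

# Hypothetical visit log: one row per session (column names are assumptions)
sessions = pd.DataFrame({
    "visitor_id":      ["a", "a", "b", "c", "c", "d"],
    "source":          ["organic", "organic", "paid", "social", "organic", "paid"],
    "pages_viewed":    [1, 4, 2, 1, 3, 1],
    "seconds_on_site": [15, 320, 90, 10, 240, 5],
})

report = {
    "visits": len(sessions),
    "unique_visitors": sessions["visitor_id"].nunique(),
    "avg_time_on_site_sec": sessions["seconds_on_site"].mean(),
    # treating a single-page session as a bounce (a common, but not universal, definition)
    "bounce_rate": (sessions["pages_viewed"] == 1).mean(),
    "visits_by_source": sessions["source"].value_counts().to_dict(),
}
print(report)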

Read More...
posted about 10 hours ago on hacker noon
“Things are only impossible until they’re not.” –Captain Jean-Luc Picard

Congratulations, Captain! 🎉 🎉🎉 You have won the following award: https://www.noonies.tech/2022/web3/2022-hackernoon-contributor-of-the-year-dao

What does it mean for you to win this title?

I am profoundly excited to win in this category, and I consider this an encouragement to work harder and not rest on my laurels.

How do you or your company intend to embrace the responsibility of this title in 2023?

Writing about DAOs (Decentralised Autonomous Organisations) that are aspiring to great heights is a responsibility in which I have found immense pleasure. I write to inform, educate, and inspire others to imagine a world run by DAOs.

I have written about how MoonDAO became the first DAO to send a human to outer space, and I also have articles exploring Ukraine DAO and Domain DAO. In 2023, I am going to double down on spotlighting the potential of DAOs and why they are the future of an interconnected humanity.

What goals are you looking forward to accomplishing in 2023 (whether it be through company initiatives or your personal journey)?

I hope to build my personal brand and create job opportunities for many people in the web3 space.

Which trend(s) are you most excited about in 2023? Share your reason.

I have gone all in, and blockchain technology has won me over. I am not looking back. I am an ardent believer in its potential to change the course of history within a short window of time.

2022 has been crazy, especially in tech – what with layoffs, web3 fraud, and AI! Which trend are you most concerned about? What solutions can you think of? Be as brief or as detailed as you like.

The image problem of the crypto world is a serious one, to say the least. But it is really comforting to see the passion many are bringing to the web3 space as it innovates its way to success. There is still room for improvement in preventing crypto fiascos like Terra Luna and FTX.

Share your biggest success so far and/or your biggest failure so far.

Winning two HackerNoon awards (DAO and FUNDING) is my biggest personal achievement so far. I am also glad that I am able to put my talent and time to good use, which helps me make money and network with others.

We would love your feedback on HackerNoon as a tech publication! How has your experience been with us?

HackerNoon means a lot to me. I don’t skip my daily routine of exploring its exhilarating content and sharing it with my contacts online.

Any words of wisdom you’d like to share with us?

Face your fear. Step out and conquer the world.

:::info The 2022 Noonies are sponsored by: .Tech Domains by Radix, and BingX. You will be receiving a .Tech Domain for life as well as an official HackerNoon NFT! :::

Read More...
posted about 10 hours ago on hacker noon
The Palmer Method of Business Writing, by A. N. Palmer, is part of the HackerNoon Books series. You can jump to any chapter in this book here.

LESSON 100

Be sure to fix in mind the image of the letter before attempting it. Study closely the proportions and the direction of every stroke. Make about fifty capital R’s to the minute, as given in the next page.

Do not forget that your advancement depends upon movement, and that movement depends much upon position. The body should be self-supporting, with the feet resting squarely on the floor, and should not crowd against the desk; the right arm should be well out from the side; the right hand well in front of the eyes; and the paper twelve or fourteen inches from the eyes.

If the wrist or side of the hand rests on the paper, all motion coming from the muscles of the arm will stop at the wrist and it will be an impossibility to use muscular movement. Watch the wrist and the side of the hand closely. Remember that the propelling power is above the elbow, in the upper arm and shoulder.

Write line after line of the word “Running” with a light, quick motion, and compare with the copy frequently. Twelve to fourteen words should be written to the minute.

Drill 122

About HackerNoon Book Series: We bring you the most important technical, scientific, and insightful public domain books. This book is part of the public domain.

Palmer, A. N. 2021. The Palmer Method of Business Writing. Urbana, Illinois: Project Gutenberg. Retrieved December 2022 from https://www.gutenberg.org/files/66476/66476-h/66476-h.htm

This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org, located at https://www.gutenberg.org/policy/license.html.

Read More...
posted about 11 hours ago on hacker noon
\ :::info Any views expressed in the below are the personal views of the author and should not form the basis for making investment decisions, nor be construed as a recommendation or advice to engage in investment transactions. ::: The question at hand is whether the current price for Bitcoin is at the “bottom”. Bitcoin is the purest and most battle-tested form of crypto money – and while it may not fall the most, its role as crypto’s reserve asset will ensure that it’s Bitcoin that leads us out of the shadow of darkness. Therefore, we must focus on Bitcoin’s price action to divine whether this market’s bottom has occurred or not. \ There are three cohorts that were forced to puke their Bitcoin into the righteous hands of the true believers: the centralized lending and trading firms, Bitcoin mining operations, and ordinary speculators. In every case, misuse of leverage – whether it was in their business operating model or they used it to finance their trades – was the cause of the liquidations. With short-term US Treasury yields moving from 0% in Q3 2021 to 5% at present, everyone has suffered bigly for their uber-bullish convictions. \ After walking through how leverage destroyed each cohort’s position as rates rose, I will then explain why I think they have no more Bitcoin left to sell – And why, therefore, at the margin, we likely already hit the lows of this cycle during the recent FTX / Alameda catastrophe. \ In the final section of this essay, I will then lay out the way in which I plan to trade this possible bottom. To that end, I recently participated in a webinar with my macro daddy Felix Zulauf. At the end of the broadcast, he said something that hit home. He said that investors and traders need to be concerned with recognizing the tops and bottoms, but that most focus on the noise in the middle, and that calling a bottom is usually a fool's errand. Since I’m embarking on that very fool’s errand, I intend to try to call it in a way that protects my portfolio, with the maximum amount of cushion to be wrong on the level and/or timing. \ With that in mind, let’s dive in. Bankruptcy Order of Operations Most of us are probably not as gifted as Caroline Ellison, so we had to learn maths the hard way. Do you remember PEMDAS? It’s the acronym that describes the order of operations when solving equations: \ P - Parentheses E - Exponents M - Multiplication D - Division A - Addition S - Subtraction \ The fact that I still remember this acronym many decades after first learning it speaks to its sticking power. \ But equations aren’t the only thing with a static order of operations – bankruptcies (and the contagion that follows) occur in a very specific order, too. Let me start by explaining what that order looks like, and why it occurs in that sequence. \ Before I do, though, I want to acknowledge that no one wants or intends to go bankrupt. So, I apologize in advance if I come across as insensitive to the strife of those who lost money because of Sam “I mislabeled my bank accounts” Bankman-Fried (SBF). But, this scammer just keeps opening his mouth and saying dumb shit that he needs to be called out for – so the rest of this essay will be peppered with references to our “right kind of white” boy and the sad melodrama he is responsible for. Now, let’s get back to it. \ Centralized lending firms (CEL) usually go bankrupt because they either lent money to entities that can’t pay them back, or they have duration mismatches in their lending books. 
Duration mismatches occur because the lenders receive deposits that can be recalled by their depositors on a short time frame, but they make loans using those deposits on a longer time frame. If the depositors want their money back or demand a higher rate of interest due to changing market conditions, then the CEL – absent an injection from some white knight firm – becomes insolvent and bankruptcy quickly follows.

Before a CEL becomes insolvent or goes bankrupt, it will attempt to raise funds to ameliorate the situation. The first thing it will do is call all loans that it can. This mainly affects anyone who borrowed money from it with a short time horizon.

Imagine you are a trading firm that borrowed money from Celsius – but within a week, Celsius asks for those funds back, and you have to oblige. As a trading firm, getting recalled in a bull market is no biggie. There are plenty of other CELs who will lend you funds so that you don’t have to liquidate your existing positions. But when the bull market fades and there’s a market-wide credit crunch, all CELs typically recall their loans at around the same time. With no one to turn to for additional credit, trading firms are forced to liquidate their positions to meet capital calls. They will liquidate their most liquid assets first (i.e., Bitcoin and ETH), and hopefully their portfolio doesn’t contain too many illiquid shitcoins like Serum, MAPS, and Oxygen (cough Alameda and 3AC cough).

After a CEL recalls all the short-term loans that it can, it will begin liquidating the collateral that underpins its loans (assuming it actually asked for any – looking at you, Voyager). In the crypto markets, the biggest collateralized lending category prior to the recent implosions was loans secured by Bitcoin and Bitcoin mining machines. So once things start to go south, CELs start by selling Bitcoin, as it’s the asset most used to collateralize loans AND it's the most liquid cryptocurrency. They also turn to the mining firms that they have lent to and ask them to pony up either Bitcoin or their mining rigs – but if those CELs don’t operate a data center with cheap electricity, the mining rigs are about as useful as SBF’s accounting skills.

So while the credit crunch is ongoing, we see large physical sales of Bitcoin hitting the centralized and decentralized exchanges from both a) CELs trying to avoid bankruptcy by selling the Bitcoin they have received as collateral, and b) trading firms who have seen their loans recalled and must liquidate their positions. This is why the price of Bitcoin swoons BEFORE CELs go bankrupt. That’s the big move. The second move down – if there is one – is driven by the fear that occurs when firms that were once thought to be unshakable suddenly start posturing as zombies on the cusp of liquidating their assets. This tends to be a smaller move, as any firms at risk of bankruptcy are already busy liquidating Bitcoin so that they can survive the crash. The above chart of Binance’s BTC/BUSD trading volume illustrates that volumes spiked during the two credit crashes of 2022. It is in this span of time that all these once-storied firms bit the dust.

To summarize, as CELs transition from solvency, to insolvency, to bankruptcy, these other ecosystem players are affected:

Trading firms who borrowed short-term money from CELs and saw their loans recalled.
Bitcoin mining firms who borrowed what was typically fiat collateralized by either Bitcoin on their balance sheet, future Bitcoin to be mined, and/or Bitcoin mining rigs. \ The two largest muppet crypto trading firms, Alameda and 3AC, both grew to such a gargantuan size because of cheap borrowed money. In the case of Alameda, the polite way to put it is that they “borrowed” it from FTX customers – although others might call it theft. In the case of 3AC, they hoodwinked gullible and desperate CELs to lend them funds with little-to-no collateral. In both cases, the lenders believed these and other trading firms were engaged in super-duper-smart arbitrage trades that rendered these firms immune to the vicissitudes of the markets. However, we know now these firms were just a bunch of degen, long-only punters in meth mode. The only difference between them and the masses was that they had billions of dollars to play with. \ When these two firms got into trouble, what did we see? We saw large transfers of the most liquid cryptos – Bitcoin (WBTC in DeFi) and Ether (WETH in DeFi) – to centralized and decentralized exchanges that were then sold. This happened during the big move down. When the dust settled and neither firm could boost the asset side of their balance sheet higher than the liability side, their remaining assets consisted almost purely of the most illiquid shitcoins. Looking through the bankruptcy filings of centralized lenders and trading firms, it is not entirely obvious what crypto assets remain. The filings lump everything together. So I can’t demonstratively prove that all Bitcoin held by these failed institutions was sold during the multiple crashes, but it does look as if they tried their best to liquidate the most liquid crypto collateral they could right before they went under. \ The CELs and all large trading firms already sold most of their Bitcoin. All that is left now are illiquid shitcoins, private stakes in crypto companies, and locked pre-sale tokens. It’s irrelevant to the progression of the crypto bear market how a bankruptcy court eventually deals with these assets. I have comfort that these entities have little to no additional Bitcoin to sell. Next, let’s look at the Bitcoin miners. Bitcoin Mining Firms Electricity is priced and sold in fiat, and it is the key input to any Bitcoin mining business. Therefore, if a mining firm wants to expand, they either need to borrow fiat or sell Bitcoin on their balance sheet for fiat in order to pay their electricity bills. Most miners want to avoid selling Bitcoin at all costs, and therefore take out fiat loans collateralized by either Bitcoin on their balance sheets, yet-to-be-produced Bitcoin, or Bitcoin mining rigs. \ As Bitcoin’s price rises, lenders feel emboldened to lend more and more fiat to mining firms. The miners are profitable and have hard assets to lend against. However, the ongoing quality of the loans is directly connected to Bitcoin’s price level. If the Bitcoin price falls quickly, then the loans will breach minimum margin levels before the mining firms can earn enough income to service the loans. And if that happens, the lenders will step in and liquidate the miner’s collateral (as I described in the previous section). \ We anecdotally know this happened because the massive downturn in asset prices, particularly in the crypto bear market, have – along with rising energy prices – squeezed miners across the industry. Iris Energy is facing a default claim from creditors on $103M of equipment loans. 
September saw the first Chapter 11 bankruptcy from a major player, Compute North, with other big firms including Argo Blockchain (ARBK) seemingly teetering on the edge of solvency. \ But, let’s look at some charts to examine how these waves of crypto credit crunches affected the miners and what they did in response. \ Glassnode publishes an excellent chart which shows the net 30-day change in Bitcoin held by miners. \ As we can see, miners have been net selling a large amount of Bitcoin since the first credit crunch in the summer. They must do this in an attempt to stay current on their big fiat debt loads. And if they don’t have debt, they still need to pay electricity bills – and since the price of Bitcoin is so low, they have to sell even more of it to keep the facility operational. \ While we don’t – and never will – know if we have hit the maximum amount of net selling, at least we can see that the mining firms are behaving as we would expect given the circumstances. \ Some miners didn’t make it, or they had to downsize their operations. That is evident in the change in hashrate. I took the hashrate and first computed a rolling 30-day average. I then took that rolling average and looked at the 30-day change. I did this because the hashrate is quite volatile, and it needed some smoothing. In general, the hashrate has trended higher over time. But, there are periods where the 30-day growth is negative. The hashrate declined right after the summer meltdown, and then most recently plunged due to the FTX / Alameda fallout. Again, this confirms our theory that miners will downsize operations when there is no more credit available to fund their electricity bills. \ We also know that some high-cost miners had to cease operations because they defaulted on their loans. Any lender who took mining machines as collateral will likely find it difficult to make use of them, since they aren’t already in the business of operating data centers. And since they can’t use them, the lenders must then sell these machines in the secondary market, and that process takes time. This also contributes to the hashrate falling for a period of time. \ This is a chart of the price of a Bitmain S19 or other comparable mining machine with under 38 Joules (J) / Terahash (TH) efficiency. As we can see, the collateral value of an S19 has plummeted alongside the price of Bitcoin. Imagine you lent USD against these rigs. The miners you lent to tried to sell Bitcoin to provide more fiat to service your loan, but in the end couldn’t do so because marginal profitability declined. The miners then defaulted on their loans and handed over their machines — which are worth almost 80% less now than when the loan was undertaken — as repayment. We can guess that the most feverish point of loan origination was near the top of the market. Muppet lenders always buy the top and sell the bottom … every single fucking time! Now that CELs have collections of mining rigs that they can’t easily sell and can’t operate, they can try to sell them and recover some funds – but it’s going to be single digit cents on the dollar, given that new machines are trading 80% off from a year ago. They can’t operate a mining farm because they lack a data centre with cheap electricity. And that’s why the hashrate just disappears – because of an inability to turn the machines back on. 
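As a quick aside on the hashrate smoothing described above (a 30-day rolling average, then the 30-day change of that average), here is a minimal pandas sketch of the same calculation. The column name and the synthetic sample series are assumptions for illustration; in practice the daily hashrate would come from a data provider such as Glassnode.

import pandas as pd
import numpy as np

# Hypothetical daily hashrate series (exahashes per second); replace with real data.
days = pd.date_range("2022-01-01", periods=365, freq="D")
hashrate = pd.Series(200 + np.random.randn(365).cumsum(), index=days, name="hashrate_eh_s")

# 1) Smooth the noisy daily series with a 30-day rolling average.
smoothed = hashrate.rolling(window=30).mean()

# 2) Take the 30-day change of that smoothed series; negative values mark
#    periods where hashrate is coming offline.
change_30d = smoothed.diff(30)

print(change_30d.dropna().tail())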
\ Going forward, if we believe that most – if not all – mining loans have been extinguished, and there is no new capital to be lent to miners, then we can expect miners to sell most – if not all – of the block reward they receive. As the table above shows, if miners sold all the Bitcoin they produced each day, it would barely impact the markets at all. Therefore, we can ignore this ongoing selling pressure, as it is easily absorbed by the markets. \ I believe that the forced selling of Bitcoin by CELs and miners is over. If you had to sell, you would have already done so. There is no reason why you would hold on if you had an urgent need for fiat to remain a going concern. Given that almost every major CEL has either ceased withdrawals (pointing to insolvency at best) or gone bankrupt, there are no more miner loans or collateral to be liquidated. Small Scale Speculators These punters are your run-of-the-mill traders. While many of these individuals and firms definitely imploded, the failure of these entities would not be expected to send massive negative reverberations through the ecosystem. That being said, their behavior can still help us form a guess as to where the bottom is. \ The Bitcoin / USD perpetual swap (invented by BitMEX) is the most traded of any crypto instrument. The number of open long and short contracts – called the open interest (OI) – tells us how speculative the market is. The more speculative it is, the more leverage is being used. And as we know, when the price changes directions quickly, it leads to large amounts of liquidations. In this case, the all-time high in OI coincided with the all-time high of Bitcoin. And as the market fell, longs at the margin got liquidated or closed their losing positions, which resulted in OI falling, too. \ Taking a look at the sum of OI across all major crypto derivatives centralized exchanges, we can see that the OI local low also coincided with the sub $16,000 stab of Bitcoin on Monday November 14th. Now, the OI is back to levels not seen since early 2021. \ The timing and magnitude of the reduction of the OI leads me to believe that most of the over-leveraged long positions have been extinguished. What remains are traders using derivatives as a hedge, and those using very low leverage. This gives us a bedrock to move higher. \ Could the OI fall further as we enter the sideways, non-volatile part of the bear market? Absolutely. But the OI’s rate of change will slow, which means chaotic trading periods featuring large amounts of liquidations (particularly on the long side) are not likely to occur. Timing Re-entry What I Don’t Know \ I don’t know if $15,900 was this cycle’s bottom. But, I do have confidence that it was due to the cessation of forced selling brought on by a credit contraction. \ I don’t know when or if the US Federal Reserve will start printing money again. However, I believe the US Treasury market will become dysfunctional at some point in 2023 due to the Fed’s tightening monetary policies. At that point, I expect the Fed will turn the printer bank on, and then boom shaka-laka – Bitcoin and all other risk assets will spike higher. \ What I Do Know \ Everything is cyclical. What goes down, will go up again. \ I like earning close to 5% by investing in US Treasury bills with durations shorter than 12 months. And therefore, I want to be earning a yield while I wait for the crypto bull market to return. \ What to Do? \ My ideal crypto asset must have beta to Bitcoin, and to a lesser extent, Ether. 
These are the reserve assets of crypto. If they are rising, my assets should rise by at least the same amount – this is called crypto beta. This asset must produce revenue that I can claim as a token holder. And this yield must be much greater than the 5% I can earn buying 6- or 12-month treasury bills. \ I have a few super-powered assets such as GMX and LOOKS in my portfolio. This is not the essay where I go into why I will be opportunistically selling my T-bills and purchasing these during the upcoming months of the hopefully sideways bear market. But if you want to start down the path towards finding the right asset to both participate in the upside and earn income while you wait for the return of the bull market, pull up a site like Token Terminal and look at which protocols generate actual revenue. It is then up to you to investigate which protocols have appealing tokenomics. Some may earn a lot of revenue, but it is very hard for a token holder to extract their share of that revenue to their own wallet. Some protocols pay out a majority of revenue continuously, directly to token holders. The best part about some of these projects is that all things DeFi got shellacked during the two downward waves of the 2022 crypto credit crunch. Investors threw out good projects along with the bad as they rushed to raise fiat to repay loans. As a result, many of these projects trade at a truly bombed out price to fees (P/F) ratio. \ If I can earn 5% in treasuries, then I should at least earn 4x of that – i.e., 20% – when purchasing one of these tokens. A 20% per annum yield means I should only invest in projects with a P/F ratio of 5x or lower. Everyone will have a different hurdle rate, but that is mine. \ I could purchase Bitcoin and or Ether, but neither of these cryptos pays me enough yield. And if I’m not getting sufficient yield, I’m hoping that the price appreciation in fiat terms will be stupendous when the market turns. While I do believe that will occur, if there are cheaply priced protocols where I get the return profile of Bitcoin and Ether plus yield from the actual usage of the service, happy days! \ Investing at what you think is the bottom is certainly risky. You are out there all alone, spreading the good word of Satoshi against the sweet siren song of the TradFi devil and their harpies. But be not afraid, intrepid and righteous warrior, for to the faithful the spoils of war shall accrue.

Read More...
posted about 11 hours ago on hacker noon
Machine learning models are often developed in a training environment, which may be online or offline, and can then be deployed to work with live data once they have been tested. One of the most critical skills you'll need on data science and machine learning projects is the ability to deploy a model. Model deployment is the process of integrating your model into an existing production environment, where it receives input and predicts an output.

You are going to learn how to manage your machine learning project and deploy a machine learning model into production using the following open-source tools:

1. Dagshub

A web platform for data scientists and machine learning engineers to host and version code, data, experiments, and machine learning models, integrated with other open-source tools such as:

- Git — tracking source code and other files
- DVC — tracking data and machine learning models
- MLflow — tracking machine learning experiments

2. Streamlit

An open-source Python library for creating and sharing web applications for data science and machine learning projects. The library can help you develop and deploy a data science solution in a matter of minutes with only a few lines of code.

This tutorial will cover the following topics:

- Create and manage your machine learning project with Dagshub.
- Build an ML model to classify mobile price ranges.
- Deploy your ML model using Streamlit to create a simple data science web app.

So let's get started.

How to Create a Project Using Dagshub

After creating your account on Dagshub, you will be given different options to start creating your first project:

- New Repository: Create a new repository directly on the Dagshub platform.
- Migrate A Repo: Migrate a repository from GitHub to Dagshub.
- Connect A Repo: Connect and manage your repository through both GitHub and Dagshub.

There should be a lot of similarities between the interface of your new repository on DagsHub and the interface of your existing repository on GitHub. However, there should be some additional tabs, such as Experiments, Reports, and Annotations. You can clone and star this repository on DagsHub to follow along throughout the article.

Mobile Price Dataset

We will use the Mobile Price dataset to classify the price range into the categories mentioned below:

- 0 (low cost)
- 1 (medium cost)
- 2 (high cost)
- 3 (very high cost)

The dataset is available here, and a copy is included in the Data folder as data.csv. We will split the dataset into train and test dataframes for training and validation.

Packages Installation

In this project, we will use the following Python packages:

- Pandas for data manipulation.
- scikit-learn for training machine learning algorithms.
- MLflow for tracking machine learning experiments.
- DVC (Data Version Control) for tracking and versioning datasets and machine learning models.
- Joblib for saving and loading machine learning models.
- Streamlit for deploying the machine learning model in a web app.

All these packages are listed in the requirements.txt file.
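The tutorial never shows the file itself, so here is a minimal sketch of what this project's requirements.txt could contain, based only on the packages listed above. Leaving the entries unpinned is an assumption; add version pins to match your own environment.

pandas
numpy
scikit-learn
mlflow
dvc
joblib
streamlit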
Install these packages by running the following command in your terminal:

pip install -r requirements.txt

Import Python Packages

After installing all packages, you need to import them before you can use them.

# import packages
import pandas as pd
import numpy as np
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
import mlflow

mlflow.sklearn.autolog()  # set autolog for sklearn
mlflow.set_experiment('Ml-classification-experiment')

import joblib
import json
import os

np.random.seed(1234)

Note: With MLflow, you can automatically track machine learning experiments by using the autolog() function from the mlflow.sklearn module.

Load and Version the Mobile Price Dataset

raw_data = pd.read_csv("data/raw/data.csv")

Data Version Control (DVC) is an open-source solution that allows you to track changes to your machine learning project's data as well as its models. After you complete the account creation process, Dagshub provides you with 10 GB of free storage for DVC.

Within each repository, Dagshub will automatically generate a remote storage link as well as a list of commands to get your data tracking process started. Run the following command to add the Dagshub DVC remote:

dvc remote add origin https://dagshub.com/Davisy/Mobile-Price-ML-Classification-Project.dvc

Note: The above command adds the repository as the remote for DVC storage; your URL will be slightly different from the one shown here.

Then you can start tracking the dataset with the following command:

dvc commit -f data/raw.dvc

Let's check the shape of the dataset.

print(raw_data.shape)

The dataset contains 21 columns (20 features and 1 target), and luckily it has no missing values.

Split the mobile price data into features and target. The target column is called "price_range".

features = raw_data.drop(['price_range'], axis=1)
target = raw_data.price_range.values

Data Preprocessing

The features must be standardized before being fed to the machine learning algorithms. We will use StandardScaler from scikit-learn to perform this task.

scaler = StandardScaler()
features_scaled = scaler.fit_transform(features)

The next step is to split the data into train and validation sets.
20% of the mobile price dataset will be used for validation.

X_train, X_valid, y_train, y_valid = train_test_split(features_scaled, target, test_size=0.2, stratify=target, random_state=1)

Here is a sample from the train set (the first row of X_train).

print(X_train[0])

[ 1.56947055 -0.9900495  1.32109556 -1.01918398  0.15908825 -1.04396559
 -1.49088996  1.03435682  0.61459469  0.20963905  1.00341448 -0.93787756
 -0.57283137 -1.3169798   0.40204724  1.43112714  0.73023981  0.55964063
  0.99401789  0.98609664]

We need to track the processed data with DVC for efficiency and reproducibility. First, we create dataframes for both the train set and the valid set, and then save them in a processed folder, as shown in the block of code below.

# create a dataframe for the train set
X_train_df = pd.DataFrame(X_train, columns=list(features.columns))
y_train_df = pd.DataFrame(y_train, columns=["price_range"])

# combine features and target for the train set
train_df = pd.concat([X_train_df, y_train_df], axis=1)

# create a dataframe for the valid set
X_valid_df = pd.DataFrame(X_valid, columns=list(features.columns))
y_valid_df = pd.DataFrame(y_valid, columns=["price_range"])

# combine features and target for the valid set
valid_df = pd.concat([X_valid_df, y_valid_df], axis=1)

# save the processed train and valid sets
train_df.to_csv('data/processed/data_train.csv', index_label='Index')
valid_df.to_csv('data/processed/data_valid.csv', index_label='Index')

Then run the following command to track the processed data (train and valid sets):

dvc commit -f process_data.dvc

Finally, we can save the trained standard scaler by using the dump method from the joblib package.

# save the trained scaler
joblib.dump(scaler, 'model/mobile_price_scaler.pkl')

Note: We will use the trained scaler in the Streamlit web app.

Training Machine Learning Algorithms

MLflow is a great open-source machine learning experimentation package. You can use it to package and deploy machine learning projects, but in this article, we'll concentrate on its tracking API.

We will use the free tracking server provided by Dagshub so that all MLflow files are saved remotely in the repository, and anyone who can access your project will be able to view them.

To send machine learning experiment results to the tracking server, you need to set the tracking URL and your Dagshub username and password as follows.

Note: You just need to copy the remote tracking URL for MLflow from your Dagshub repository.

# use MLflow tracking
mlflow.set_tracking_uri("https://dagshub.com/Davisy/Mobile-Price-ML-Classification-Project.mlflow")
os.environ["MLFLOW_TRACKING_USERNAME"] = "username"
os.environ["MLFLOW_TRACKING_PASSWORD"] = "password"

Note: The experiment results will be logged directly to the Dagshub repository under the Experiments tab.

Finally, we need to run some machine learning experiments.
First, we split features and target from both the train and valid sets.

# load the processed data for both train and valid sets
X_train = train_df[train_df.columns[:-1]]
y_train = train_df['price_range']
X_valid = valid_df[valid_df.columns[:-1]]
y_valid = valid_df['price_range']

The first experiment is to train the Random Forest algorithm on the train set and check its performance on the valid set.

# train random forest algorithm
rf_classifier = RandomForestClassifier(n_estimators=200, criterion="gini")

with mlflow.start_run():
    # train the model
    rf_classifier.fit(X_train, y_train)
    # make predictions
    y_pred = rf_classifier.predict(X_valid)
    # check performance
    score = accuracy_score(y_pred, y_valid)
    mlflow.end_run()

print(score)

The above block of code performs the following tasks:

- Instantiate the Random Forest algorithm.
- Start the MLflow run.
- Train the machine learning model.
- Make predictions on the validation set.
- Check the accuracy of the machine learning model.
- End the MLflow run.
- Finally, print the accuracy score of the machine learning model.

The accuracy score is 0.895 for the Random Forest algorithm.

Note: We use the autolog function in mlflow.sklearn to automatically keep track of the experiment. This means it will automatically track model parameters, metrics, files, and similar information.

You can change the default parameters of the Random Forest algorithm to run multiple experiments and find out which values provide the best performance.

Let's try to run another experiment using the Logistic Regression algorithm.

# train logistic regression algorithm
lg_classifier = LogisticRegression(penalty='l2', C=1.0)

with mlflow.start_run():
    # train the model
    lg_classifier.fit(X_train, y_train)
    # make predictions
    y_pred = lg_classifier.predict(X_valid)
    # check performance
    score = accuracy_score(y_pred, y_valid)
    mlflow.end_run()

print(score)

The accuracy score is 0.97 for Logistic Regression. This model performs better than the Random Forest algorithm.

Here is the list of machine learning experiments recorded on DagsHub under the Experiments tab. The Experiments tab on Dagshub provides different features for analyzing the experiment results, such as comparing one experiment to another using different metrics.

You also need to track the version of the model by running the following command:

dvc commit -f model.dvc

Register the Best Model with MLflow

We will use the MLflow registry to maintain and manage the versions of the machine learning model. You need to know the run_id that produced the model with the best performance. You can find the run_id by clicking on the experiment name ('Ml-classification-experiment') within the Experiments tab.

In this example, the run_id for the logistic regression model is '17ccd85b4c7e491bbdbcba58b5eafae1'. You then use the register_model() function from MLflow to perform the task.

# grab the run ID
run_id = '17ccd85b4c7e491bbdbcba58b5eafae1'

# select a subpath name for the run
subpath = "best_model"

# select a name for the model to be registered
model_name = "Logistic Regression Model"

# build the run URI
run_uri = f'runs:/{run_id}/{subpath}'

# register the model
model_version = mlflow.register_model(run_uri, model_name)

Output:

Successfully registered model 'Logistic Regression Model'. 2022/11/10 00:22:33 INFO mlflow.tracking._model_registry.client: Waiting up to 300 seconds for model version to finish creation.
Model name: Logistic Regression Model, version 1
Created version '1' of model 'Logistic Regression Model'.

Deploy the Logged MLflow Model with Streamlit

Streamlit is an open-source Python toolkit for building and sharing data science web apps. You can use Streamlit to deploy your data science solution in a short period of time with a few lines of code. Streamlit integrates easily with prominent Python libraries used in data science, such as NumPy, Pandas, Matplotlib, and scikit-learn.

In this part, we are going to deploy the model logged in MLflow (the logistic regression model) in order to classify the price range of mobile phones.

Create the app.py File

The first step is to create a Python file called app.py, which will contain all the source code to run the data science web app.

Import Packages

Then you need to import the packages required to run both Streamlit and the best trained model.

# import packages
import streamlit as st
import pandas as pd
import numpy as np
from os.path import dirname, join, realpath
import joblib

Create the App Title and Description

You can set the header, image, and subheader for your data science web app using three different Streamlit methods called header(), image(), and subheader(), as shown in the code below.

# add banner image
st.header("Mobile Price Prediction")
st.image("images/phones.jpg")
st.subheader(
    """
A simple machine learning app to classify mobile price range
"""
)

Create a Form to Receive a Mobile's Details

We need a simple form that will receive the mobile details in order to make predictions. Streamlit has a method called form() that can help you create a form with different fields, such as number, multiple choice, text, and others.

# form to collect mobile phone details
my_form = st.form(key="mobile_form")

# function to transform Yes and No options
@st.cache
def func(value):
    if value == 1:
        return "Yes"
    else:
        return "No"

battery_power = my_form.number_input("Total energy a battery can store in one time measured in mAh", min_value=500)
blue = my_form.selectbox("Has bluetooth or not", (0, 1), format_func=func)
clock_speed = my_form.number_input("speed at which microprocessor executes instructions", min_value=1)
dual_sim = my_form.selectbox("Has dual sim support or not", (0, 1), format_func=func)
fc = my_form.number_input("Front Camera mega pixels", min_value=0)
four_g = my_form.selectbox("Has 4G or not", (0, 1), format_func=func)
int_memory = my_form.number_input("Internal Memory in Gigabytes", min_value=2)
m_dep = my_form.number_input("Mobile Depth in cm", min_value=0)
mobile_wt = my_form.number_input("Weight of mobile phone", min_value=80)
n_cores = my_form.number_input("Number of cores of processor", min_value=1)
pc = my_form.number_input("Primary Camera mega pixels", min_value=0)
px_height = my_form.number_input("Pixel Resolution Height", min_value=0)
px_width = my_form.number_input("Pixel Resolution Width", min_value=0)
ram = my_form.number_input("Random Access Memory in Mega Bytes", min_value=256)
sc_h = my_form.number_input("Screen Height of mobile in cm", min_value=5)
sc_w = my_form.number_input("Screen Width of mobile in cm", min_value=0)
talk_time = my_form.number_input("longest time that a single battery charge will last when you are", min_value=2)
three_g = my_form.selectbox("Has 3G or not", (0, 1), format_func=func)
touch_screen = my_form.selectbox("Has touch screen or not", (0, 1), format_func=func)
wifi = my_form.selectbox("Has wifi or not", (0, 1), format_func=func)

submit = my_form.form_submit_button(label="make prediction")

The above block of code contains all the fields for filling in the mobile details and a simple button to submit the details and make a prediction.

Load the Logged MLflow Model and the Scaler

Then you need to load both the logged MLflow model (for predictions) and the scaler (for input transformation). The load() method from the joblib package performs the task.

# load the mlflow registered model and scaler
mlflow_model_path = "mlruns/1/17ccd85b4c7e491bbdbcba58b5eafae1/artifacts/model/model.pkl"

with open(join(dirname(realpath(__file__)), mlflow_model_path), "rb") as f:
    model = joblib.load(f)

scaler_path = "model/mobile_price_scaler.pkl"

with open(join(dirname(realpath(__file__)), scaler_path), "rb") as f:
    scaler = joblib.load(f)

Create the Result Dictionary

The trained model predicts the output as a number (0, 1, 2, or 3). For a better user experience, we can use the following dictionary to present the actual meaning.

# result dictionary
result_dict = {
    0: "Low Cost",
    1: "Medium Cost",
    2: "High Cost",
    3: "Very High Cost",
}

Make Predictions and Show Results

Our last block of code makes predictions and shows the results whenever a user adds the mobile details and clicks the "make prediction" button on the form. After clicking the button, the web app performs the following tasks:

- Collect all the inputs (mobile details).
- Create a dataframe from the inputs.
- Transform the inputs using the scaler.
- Perform a prediction on the transformed inputs.
- Display the mobile price result according to the result dictionary (result_dict).

if submit:
    # collect inputs
    input = {
        'battery_power': battery_power,
        'blue': blue,
        'clock_speed': clock_speed,
        'dual_sim': dual_sim,
        'fc': fc,
        'four_g': four_g,
        'int_memory': int_memory,
        'm_dep': m_dep,
        'mobile_wt': mobile_wt,
        'n_cores': n_cores,
        'pc': pc,
        'px_height': px_height,
        'px_width': px_width,
        'ram': ram,
        'sc_h': sc_h,
        'sc_w': sc_w,
        'talk_time': talk_time,
        'three_g': three_g,
        'touch_screen': touch_screen,
        'wifi': wifi,
    }

    # create a dataframe
    data = pd.DataFrame(input, index=[0])

    # transform input
    data_scaled = scaler.transform(data)

    # perform prediction
    prediction = model.predict(data_scaled)
    output = int(prediction[0])

    # display results of the mobile price prediction
    st.header("Results")
    st.write(" Price range is {} ".format(result_dict[output]))

Test the Data Science Web App

We have successfully created a simple web app that deploys the model logged in MLflow and predicts the price range. To run the web app, use the following command in your terminal:

streamlit run app.py

The web app will then appear instantly in your web browser, or you can access it using the local URL http://localhost:8501. You need to fill in the mobile details and then click the make prediction button to see the prediction result.

After filling in the mobile details and clicking the make prediction button, the machine learning model predicts that the price range is Very High Cost.

Deploy the Streamlit Web App to Streamlit Cloud

The final step is to make sure the Streamlit app is available to anyone who wants to access it and use our machine learning model to predict the mobile price range. Streamlit Cloud allows you to deploy your Streamlit web app on the cloud for free.
You just need to follow the steps below:Create a new GitHub Repository on GitHubAdd your streamlit web app (app.py), model folder and requirements.txt.Create your account on a streamlit cloud platform Create a new app and then link your GitHub repository that you created by typing the name of the repository.Change the streamlit app file name from streamlit_app.py to app.pyFinally, click the Deploy button.After the streamlit cloud finished installing the streamlit app and all of its prerequisites, your application will finally be live and accessible to anyone with a link provided by streamlit cloud.link: https://davisy--mobile-price-predecition-streamlit-app-app-7clkzd.streamlit.app/ConclusionYou have gained expertise in data and model tracking with data version control (DVC), as well as tracking machine learning experiments with MLflow and DagsHub. You can share the results of your machine learning experiments with the world, both successful and failed. You have also gained powerful tools that will assist you in efficiently organizing your machine learning project.In this tutorial, you have learned:How to create your first Daghubs repository.How to track your data using data version control (DVC) and connect to the Dagshub DVC remote.How to automatically track your machine learning experiments using auto-logger classes from MLflow.How to connect MLflow to a remote tracking server in the DagsHub.How to create a Data science web app for your machine learning model using Streamlit.You can download the source code used in this article here: https://dagshub.com/Davisy/Mobile-Price-ML-Classification-ProjectIf you learned something new or enjoyed reading this tutorial, please share it so that others can see it. Until then, I'll see you in the next article!You can also find me on Twitter at @Davis_McDavid.
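A note on the autolog feature mentioned in the experiment-tracking section above: the article references mlflow.sklearn's autolog without showing it in code. Below is a minimal sketch of how it could be wired up against a DagsHub-hosted tracking server. This is not the author's exact code: the repository URI, the credentials, and the generated stand-in dataset are placeholders chosen only to make the snippet runnable on its own.

```python
# Minimal autologging sketch; replace the DagsHub URI, username, and token
# placeholders with your own values.
import os

import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Point MLflow at the remote tracking server (URI format assumed; copy the
# exact value from your DagsHub repository's remote settings).
os.environ["MLFLOW_TRACKING_USERNAME"] = "<your-dagshub-username>"
os.environ["MLFLOW_TRACKING_PASSWORD"] = "<your-dagshub-token>"
mlflow.set_tracking_uri("https://dagshub.com/<user>/<repo>.mlflow")
mlflow.set_experiment("Ml-classification-experiment")

# Autologging records hyperparameters, training metrics, and the fitted model
# for every run started after this call.
mlflow.sklearn.autolog()

# Stand-in data so the sketch runs end to end (the article uses the mobile
# price dataset instead).
X, y = make_classification(n_samples=2000, n_features=20, n_classes=4,
                           n_informative=8, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2,
                                                      random_state=42)

with mlflow.start_run():
    rf_classifier = RandomForestClassifier(n_estimators=200, criterion="gini")
    rf_classifier.fit(X_train, y_train)
    score = accuracy_score(y_valid, rf_classifier.predict(X_valid))
    mlflow.log_metric("valid_accuracy", score)  # logged next to autologged values
```

Everything trained inside the run then shows up under the Experiments tab, alongside the manually logged validation accuracy.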

Read More...
posted about 12 hours ago on hacker noon
File uploads are a necessary thing for many web applications. They allow users to share their files, photos, and videos on your site with others. While this can be a great feature, it also opens up the potential for security vulnerabilities. For example, malicious users could use a lack of file size or file name length restriction. They would bombard the server with either a massive amount of files or files with enormous sizes. Once this happens, the server’s resources might be unable to handle the load, disrupting your application. Malicious users could also disguise viruses and malware as seemingly harmless files. Once uploaded and not checked thoroughly, they could mess with the server’s files and directories. They could also change the application, phishing for information, and other dangerous activities. Hence, file uploads should be treated with caution and care.When you upload a file to a website, the file is typically stored on a server. The server will then assign the file a unique identifier and store it in a database. It may also compress the file to save space.When you try to access the file, the server will check the database for the file's identifier and retrieve the file from storage. Then, an error message will be displayed if the file is not found. Uploading files can be done in several ways, including via an HTML form, JavaScript, or FTP (File Transfer Protocol). The best way to upload a file depends on your specific needs.Why are file uploads an essential part of any web application?The file may be displayed inline on the web page or as a thumbnail if the file is an image. For example, if the file is a video, it may be streamed from the server or downloaded for playback offline.File uploads are essential to any web application because they allow users to upload and share files with others. Many file upload scripts and libraries are available, but which are the best? In the next section, we’ll discuss the seven best file upload solutions you can integrate with your application. 7 Best JavaScript APIs for file uploads Choosing among the available file upload APIs can take time because they all perform the same functions. But look closely at their features, and you’ll find the API that will help you safely, reliably, and quickly allow users to upload files. 1. FilestackFilestack is a cloud-based solution that easily handles file uploads in your web applications. It offers advanced features such as image previews, OCR, virus scanning, and content delivery API integration. It also has a 99.999% upload success rate and helps users in areas with poor network conditions upload files. 2. DropzoneJSDropzoneJS is a free, open-source library that provides drag-and-drop file uploads with image previews. It is easy to use and has various features, including support for multiple file uploads, progress bars, image previews, and drag and drop. 3. UppyUppy is a JavaScript file upload library that provides a flexible, fully customizable UI powered by React or Vue. It has a clean interface and easy-to-use API that makes it perfect for handling file uploads in your web application. 4. FilePondFilePond is a fast, lightweight, and flexible JavaScript file upload library that allows you to add files to your web application using drag and drop or selecting them from your device's file system. It has various features, including image previews, progress bars, drag-and-drop support, and multiple file uploading. 5. 
React-DropzoneReact-Dropzone is a simple component for handling drag-and-drop file upload in your React application. It is easy to use and has an intuitive API that makes it perfect for taking file uploads in your web app. 6. Angular-File-UploadAngular-File-Upload is an AngularJS directive that allows you to handle file uploads in your AngularJS applications. It is simple and has various other features for file upload use cases. 7. Simple-UploaderSimple-Uploader is a lightweight JavaScript library that allows you to handle file uploads in your web applications with ease. It has a simple API that makes it easy to use and includes features such as progress bars and image previews. ConclusionThese JavaScript libraries are great options for handling file uploads in your web application, regardless of your experience level or project size. However, if you are looking for a complete solution with advanced features such as OCR, CDN, or virus scanning, Filestack may be the best choice.
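Whichever client-side library you choose, the server should still enforce the file size and file name restrictions discussed earlier. Here is a minimal server-side sketch, assuming a Python/Flask backend; the endpoint name, size limit, and allowed extensions are illustrative assumptions rather than a definitive implementation.

```python
# Server-side upload checks: size cap, sanitized name, extension allow-list.
import os

from flask import Flask, abort, request
from werkzeug.utils import secure_filename

app = Flask(__name__)
app.config["MAX_CONTENT_LENGTH"] = 5 * 1024 * 1024  # reject request bodies over 5 MB

ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg", "pdf"}
MAX_NAME_LENGTH = 100
os.makedirs("uploads", exist_ok=True)

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files.get("file")
    if file is None or file.filename == "":
        abort(400, "No file supplied")

    # Normalize the name to strip path tricks such as ../../etc/passwd
    name = secure_filename(file.filename)
    if len(name) > MAX_NAME_LENGTH:
        abort(400, "File name too long")
    if name.rsplit(".", 1)[-1].lower() not in ALLOWED_EXTENSIONS:
        abort(400, "File type not allowed")

    # In production you would also scan the file for malware before storing it.
    file.save(os.path.join("uploads", name))
    return {"stored_as": name}, 201
```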

Read More...
posted about 12 hours ago on hacker noon
Cybersecurity is a top concern for most businesses and consumers. The threat landscape is evolving and expanding, making more businesses susceptible to cybersecurity incidents. One of the main goals of any company’s cybersecurity program is to prevent incidents from happening in the first place. However, there’s no silver bullet that companies can use to prevent attacks. \ One technique that can help an organization improve its cybersecurity posture is root cause analysis. Continue reading to learn about root cause analysis and why it’s becoming an increasingly popular cybersecurity technique. What Is Root Cause Analysis? A root cause analysis (RCA) is a cybersecurity method teams use to get to the heart of a data breach or cybersecurity incident. When a cyberattack occurs, the SecOps team must come together and – as its name suggests – find the “root cause” of the problem by conducting an analysis. \ Breaches and attacks happen in a variety of ways. For example, attacks can fall under a few categories like malware, phishing, and insider misuse. As a result, every cyber incident has a unique, singular cause or multiple causes. Not every incident will have the same cause, which is why IT professionals use the RCA method. \ Security problems sometimes stem from multiple root causes. A root cause investigation typically uncovers a range of problems lurking beneath the surface. By identifying them through root cause analysis, one can decrease the likelihood of a repeat attack happening in the future. Benefits of Root Cause Analysis What are some primary benefits of root cause analysis? Explore some examples below: \ Reduces errors coming from the same root cause Puts tools and solutions in place to prevent or address future issues Enables teams to resolve incidents more efficiently Implements tools to log and monitor for potential issues \ The goal for IT teams is to learn as much as possible about the incident so they can remove the threat from their systems. Organizations can analyze each link in the chain of events that led up to the incident. \ There are several instances where performing a root cause analysis is helpful, such as when problems are first identified or a quick fix is necessary. The 3 Types of Root Causes A root cause can fall into one of three categories: Physical, human, or organizational. Learn more about each type below. Physical If a physical piece of hardware breaks down or fails, it could cause a potential security problem for IT staff. Cybercriminals will use any means to gain access to a corporate network, and going after broken hardware is no exception. Human Perhaps unsurprisingly, 81% of hacking-related data breaches had a root cause of weak or stolen passwords from employees. Human employees are the first defense against external cybersecurity threats, which is why training is so important. The average employee might not know enough about cybersecurity to practice good cyber hygiene, opening companies up to more cyber risks. Organizational Root causes under the organizational category occur when company leaders make administrative mistakes. For example, if a marketing team fails to update its content management software (CMS), it could leave them vulnerable to a cyber incident. Understanding 3 Root Cause Analysis Methods Organizations can choose from three root cause analysis methods – mapping, the “5 Whys,” and Fishbone – for security incident response. Learn more about these three methods below. 
Mapping After an incident occurs, teams can use the root cause analysis mapping method, which involves creating a detailed cause map. The map creates a visualization of data to help leaders respond to the incident appropriately. It should answer three essential questions: \ What type of incident happened Why the incident happened What actions to take to prevent the same incidents in the future \ The map should connect all individual cause-and-effect relationships so it eventually reveals the root cause of the incident. The “5 Whys” The “5 Whys” root cause analysis approach is another way to determine an incident’s root cause. The only thing a company needs to do with this approach is to ask the question “Why?” five times consecutively. By asking the question, finding an answer, and questioning “Why?” again, IT teams can reach the heart of the issue. \ While using this approach, continue asking why and other questions like when, what, and how. Keep in mind that some root causes are a symptom of another root cause, so you might have to ask why more than five times! Fishbone The Fishbone root cause analysis, also known as the Ishikawa diagram, is the third method one can use to identify root causes. As mentioned before, an incident can occur due to a larger problem. The Ishikawa diagram is helpful in determining the symptoms of a problem versus the root cause. \ Originally, the Ishikawa diagram was used to monitor quality-control issues in the shipbuilding industry. Now, the diagram is widely used by companies in a variety of industries, such as cybersecurity, marketing, and finance. 6 Essential Steps to Conduct a Root Cause Analysis Employees with knowledge of the subject matter, cybersecurity expertise, or a direct connection to the incident should be involved in all root cause analyses. No matter which method a company uses, IT and SecOps must work together to find the root cause of a cybersecurity incident to boost their defenses and mitigate future risks. \ Here are six steps companies should follow to conduct an effective root cause analysis. 1. Define Event Once an action or incident response team forms, the next step is to define the event. Was it a data breach? Was it a social-engineering attack? Define the specific details of the incident. 2. Identify Potential Causes The second step is to identify any potential causes of the issue. It might help if the security team organizes potential causes by categorizing them as physical, human, or organizational. 3. Finding the Root Cause After time spent deliberating, use the process of elimination to determine the root cause of the cyber incident. Did an employee use a weak password? Was someone using an outdated software solution? Now is the time to decide the method of attack used, the suspected party, and any impacted customers, clients, and employees. 4. Find a Solution The main purpose of an incident response plan is to find a solution to the problem. One reason why root cause analyses work so well is because, once the root cause is identified, it’s much easier for cybersecurity professionals to rectify the issue. 5. Implement Solution After coming up with a feasible solution to the attack, implement it. Let all parties involved know about what’s happened, and always be transparent about attacks. If customer data was hacked, it’s critical they’re made aware of the attack so they can take prompt action. 6. Monitor Once the solution is implemented, the IT and SecOps teams should monitor its effectiveness. 
No organization wants to follow these steps and conduct a root cause analysis unless the issues can be avoided in the future. The monitoring step is just as important as the other steps in a root cause analysis approach. Using RCA in the Cybersecurity Industry In the general cybersecurity industry, it’s important to gather data and glean insights before making any decisions. RCA provides the information an incident response team needs in order to recover from an attack. Companies should refer to the tips outlined above when handling cybersecurity attacks to prevent future breaches. \
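To make the "5 Whys" method a little more concrete, here is a small illustrative sketch of how a team might record the chain from incident to root cause; the incident details and the class design are invented for the example and are not part of any standard.

```python
# Record a "5 Whys" chain: each answer becomes the next question's subject.
from dataclasses import dataclass, field

@dataclass
class FiveWhys:
    incident: str
    whys: list[str] = field(default_factory=list)

    def ask_why(self, answer: str) -> None:
        self.whys.append(answer)

    @property
    def root_cause(self) -> str:
        return self.whys[-1] if self.whys else "not yet identified"

rca = FiveWhys("Customer records exfiltrated from the CRM")
for answer in [
    "An attacker logged in with a valid employee account",          # why 1
    "The employee reused a password leaked in another breach",      # why 2
    "Multi-factor authentication was not enforced on the CRM",      # why 3
    "The MFA rollout excluded legacy applications",                 # why 4
    "No policy requires MFA coverage reviews for legacy systems",   # why 5: organizational root cause
]:
    rca.ask_why(answer)

print(rca.root_cause)
```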

Read More...
posted about 12 hours ago on hacker noon
\ Read/write splitting is a technique to route reads and writes to multiple database servers, allowing you to perform query-based load balancing. Implementing this at the application level is hard because it couples code or configuration parameters to the underlying database topology. For example, you might have to define different connection pools for each server in the database cluster. \ MariaDB MaxScale is an advanced database proxy that can be used as a read/write splitter that routes SELECT statements to replica nodes and INSERT/UPDATE/DELETE statements to primary nodes. This happens automatically without having to change your application code or even configuration—with MaxScale, the database looks like a single-node database to your application. \ In this hands-on tutorial, you’ll learn how to configure MariaDB database replication with one primary and two replica nodes, as well as how to set up MaxScale to hide the complexity of the underlying topology. The best part: you’ll learn all this without leaving your web browser! The Play With Docker Website Play With Docker (PWD) is a website that allows you to create virtual machines with Docker preinstalled and interact with them directly in your browser. Log in and start a new session. \ \ You will use a total of 5 nodes: node1: Primary server node2: Replica server A node3: Replica server B node4: MaxScale database proxy node5: Test machine (equivalent to a web server, for example) \ Note: Even though databases on Docker containers are a good fit for the simplest scenarios and for development environments, they might not be the best option for production environments. MariaDB Corporation does not currently offer support for Docker deployments in production environments. For production environments, it is recommended to use MariaDB Enterprise (on the cloud or on-premise) or MariaDB SkySQL (currently available on AWS and GCP). Running the Primary Server Add a new instance using the corresponding button: \ \ On node1, run a MariaDB primary server as follows: \ docker run --name mariadb-primary \ -d \ --net=host \ -e MARIADB_ROOT_PASSWORD=password \ -e MARIADB_DATABASE=demo \ -e MARIADB_USER=user \ -e MARIADB_PASSWORD=password \ -e MARIADB_REPLICATION_MODE=master \ -e MARIADB_REPLICATION_USER=replication_user \ -e MARIADB_REPLICATION_PASSWORD=password \   bitnami/mariadb:latest \ This configures a container running MariaDB Community Server with a database user for replication (replication_user). Replicas will use this user to connect to the primary. Running the Replica Servers Create two new instances (node2 and node3) and run the following command on both of them: \ docker run --name mariadb-replica \ -d \ --net=host \ -e MARIADB_MASTER_ROOT_PASSWORD=password \ -e MARIADB_REPLICATION_MODE=slave \ -e MARIADB_REPLICATION_USER=replication_user \ -e MARIADB_REPLICATION_PASSWORD=password \ -e MARIADB_MASTER_HOST= \   bitnami/mariadb:latest \ Replace the empty MARIADB_MASTER_HOST value with the IP address of node1. You can find the IP address in the instances list. \ Now you have a cluster formed by one primary node and two replicas. All the writes you perform on the primary node (node1) are automatically replicated to all replica nodes (node2 and node3). Running MaxScale MaxScale is a database proxy that understands SQL. This allows it to route write operations to the primary node and read operations to the replicas in a load-balanced fashion. Your application can connect to MaxScale using a single endpoint as if it were a single-node database.
Create a new instance (node4) and run MaxScale as follows: \ docker run --name maxscale \ -d \ --publish 4000:4000 \   mariadb/maxscale:latest \ You can configure MaxScale through config files, but in this tutorial, we’ll use the command line to make sure you understand each step. In less ephemeral environments you should use config files, especially in orchestrated deployments such as Docker Swarm and Kubernetes. Launch a new shell in node4: \ docker exec -it maxscale bash \ You need to create server objects in MaxScale. These are the MariaDB databases to which MaxScale routes reads and writes. Replace , , and with the IP addresses of the corresponding nodes (node1, node2, and node3) and execute the following: \ maxctrl create server node1 maxctrl create server node2 maxctrl create server node3 \ Next, you need to create a MaxScale monitor to check the state of the cluster. Run the following command: \ maxctrl create monitor mdb_monitor mariadbmon \ --monitor-user root --monitor-password 'password' \     --servers node1 node2 node3 \ Note: Don’t use the root user in production environments! It’s okay in this ephemeral lab environment, but in other cases create a new database user for MaxScale and give it the appropriate grants. \ Now that MaxScale is monitoring the servers and making this information available to other modules, you can create a MaxScale service. In this case, the service uses a MaxScale router to make reads and writes go to the correct type of server in the cluster (primary or replica). Run the following to create a new service: \ maxctrl create service query_router_service readwritesplit \ user=root \ password=password \     --servers node1 node2 node3 \ Finally, you need to create a MaxScale listener. This kind of object defines a port that MaxScale uses to receive requests. You have to associate the listener with the router. Run the following to create a new listener: \ maxctrl create listener \ query_router_service query_router_listener 4000 \ --protocol=MariaDBClient \ Notice how the listener is configured to use port 4000. This is the same port you published when you run the Docker container. Check that the servers are up and running: \ maxctrl list servers \ You should see something like the following: \ \ Testing the Setup To test the cluster, create a new instance (node5) and start an Ubuntu container: \ docker run --name ubuntu -itd ubuntu \ This container is equivalent to, for example, a machine that hosts a web application that connects to the database. Run a new Bash session in the machine: \ docker exec -it ubuntu bash \ Update the package catalog: \ apt update \ Install the MariaDB SQL client so you can run SQL code: \ apt install mariadb-client -y \ Connect to the database, or more precisely, to the MaxScale database proxy: \ mariadb -h 192.168.0.15 --port 4000 -u user -p \ As you can see, it’s as if MaxScale was a single database. Create the following table: MariaDB SQL \ CREATE TABLE demo.message(content TEXT); \ We want to insert rows that contain the unique server ID of the MariaDB instance that actually performs the insert operation. Here’s how: MariaDB SQL \ INSERT INTO demo.message VALUES \ (CONCAT("Write from server ", @@server_id)), \ (CONCAT("Write from server ", @@server_id)), \ (CONCAT("Write from server ", @@server_id)); \ Now let’s see which MariaDB server performed the write and read operations: MariaDB SQL \ SELECT *, CONCAT("Read from server ", @@server_id) FROM demo.message; \ Run the previous query several times. 
You should get a result like this: \ \ In my cluster, all the writes were performed by server ID 367 which is the primary node. Reads were executed by server IDs 908 and 308 which are the replica nodes. You can confirm the ID values by running the following on the primary and replica nodes: \ docker exec -it mariadb-primary mariadb -u root -p \ --execute="SELECT @@server_id" docker exec -it mariadb-replica mariadb -u root -p \ --execute="SELECT @@server_id" What’s Next? We focused on basic read/write splitting in this tutorial, but MaxScale can do much more than this. For example, enforce security to your backend database topology, perform automated failover, perform connection-based load balancing, import and export data from and into Kafka, and even convert NoSQL/MongoDB API commands to SQL. MaxScale also includes a REST API and web-based GUI for operations. Check the documentation to learn more about MaxScale. Also Published Here
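If you prefer to verify the routing from application code rather than the command-line client, here is a minimal sketch assuming Python with the pymysql package; the host, port, and credentials mirror the tutorial's values and may differ in your environment.

```python
# Reads should report a replica's @@server_id; writes land on the primary.
import pymysql

conn = pymysql.connect(host="192.168.0.15", port=4000,
                       user="user", password="password", database="demo")

with conn.cursor() as cur:
    # Routed to the primary by the readwritesplit router.
    cur.execute(
        "INSERT INTO demo.message VALUES (CONCAT('Write from server ', @@server_id))"
    )
    conn.commit()

    # SELECTs are load-balanced across the replicas.
    for _ in range(3):
        cur.execute("SELECT CONCAT('Read from server ', @@server_id)")
        print(cur.fetchone()[0])

conn.close()
```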

Read More...
posted about 13 hours ago on hacker noon
\ The Common Vulnerability Scoring System (CVSS) is a way to assign scores to vulnerabilities on the basis of their principal characteristics. This score indicates the severity of a vulnerability and, on that basis, it can be categorized as low, medium, high, or critical severity, which organizations can use to prioritize the vulnerabilities present in their systems. \ CVSS has two versions of the scoring system, CVSSv2 and CVSSv3. CVSSv2 was released in 2007 and had a scoring range of 0 - 10 with three severity levels: low, medium, and high, whereas CVSSv3 was launched in 2015 with a scoring range of 0 - 10 and five severity levels: none, low, medium, high, and critical. The Base, Temporal, and Environmental metric groups all remained the same, although there were some changes within the Base and Environmental groups to produce more accurate scores for a vulnerability. \ How CVSS works \ The CVSS score ranges from 0.0 to 10.0, where 0.0 is considered the least severe and 10.0 is the most severe. Mapping of CVSS scores to qualitative ratings: \ | Base Score range | Severity | |----|----| | 0.0 | None | | 0.1 – 3.9 | Low | | 4.0 – 6.9 | Medium | | 7.0 – 8.9 | High | | 9.0 – 10.0 | Critical | \ \ CVSS Score Metrics \ A CVSS score is derived from three sets of metrics: Base, Temporal, and Environmental. These three metric groups cover the inherent characteristics of a vulnerability, how its exploitability changes over time, and its impact in a particular environment. \ Base Metrics \ The base metrics produce the base score, ranging from 0 - 10, and represent the inherent characteristics of a vulnerability, that is, characteristics that don’t change over time. The group is made up of two sets of metrics: \ Exploitability Metrics: \ Attack vector Attack complexity Privileges required User interaction Scope \ Impact Metrics: \ Confidentiality impact Integrity impact Availability impact \ Temporal Metrics \ The temporal metrics represent the characteristics of a vulnerability that change over time. Additionally, they contain the Report Confidence metric, which measures the degree of assurance in the existence of the vulnerability. The group consists of three metrics: \ Exploit code maturity Remediation Level Report Confidence \ Environmental Metrics \ The environmental metrics represent the characteristics of a vulnerability that are relevant to, and have an impact on, a particular user’s environment. Environmental metric categories include: \ Collateral damage potential Confidentiality requirement Integrity requirement Availability requirement \ For example, consider a vulnerability with a CVSS score of 6.5 and the following vector: AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:U/RL:O/RC:R/CR:H/IR:H/AR:L/MAV:X/MAC:X/MPR:X/MUI:X/MS:X/MC:X/MI:X/MA:X \ The above vector value indicates: \ AV: L - (Attack Vector) It means that the vulnerability is exploitable by local access. AC: L - (Attack complexity) This vector value indicates that a specialized access condition does not exist. PR: L - (Privileges required) It indicates that the attacker is authorized with privileges that provide basic user capabilities. UI: N - (User interaction) The vulnerable system can be exploited without interaction from any user. S: U - (Scope) An exploited vulnerability can only affect resources managed by the same authority. C: H - (Confidentiality impact) There is a total loss of confidentiality, resulting in all resources within the impacted component being divulged to the attacker. I: H - (Integrity impact) There is a total loss of integrity.
A: H - (Availability impact) There is a total loss of availability: the attacker is able to fully deny access to resources in the impacted component. E: U - (Exploit code maturity) No exploit code is available, or an exploit is entirely theoretical. RL: O - (Remediation Level) A complete vendor solution is available. Either the vendor has issued an official patch, or an upgrade is available. RC: R - (Report Confidence) Reasonable confidence exists that the bug is reproducible and at least one impact is able to be verified. CR: H - (Confidentiality requirement) Loss of confidentiality is likely to have a catastrophic adverse effect on the organization. IR: H - (Integrity requirement) Loss of integrity is likely to have a catastrophic adverse effect on the organization. AR: L - (Availability requirement) Loss of availability is likely to have only a limited adverse effect on the organization. \ :::info Also published here. ::: \
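To make the qualitative mapping concrete, here is a small sketch (not part of the CVSS specification text above) that converts a numeric CVSS score into the severity rating from the table.

```python
# Map a numeric CVSS score to the CVSS v3 qualitative severity rating.
def cvss_severity(score: float) -> str:
    """Return the qualitative rating for a CVSS score between 0.0 and 10.0."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# The example vector discussed above has a score of 6.5.
print(cvss_severity(6.5))  # -> Medium
```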

Read More...
posted about 13 hours ago on hacker noon
DeFi (Decentralized Finance) has exploded in the crypto space over the last couple of years, and in 2021 alone, the industry was valued at just under 12 billion dollars. We caught up with Ox Bid, the CEO of Bolide.fi, to find out her view of the industry’s development over the next few years. What is DeFi? Decentralized Finance (DeFi) is an emerging self-custody financial technology that allows individuals to manage their finances independently of global institutional authorities. \ Whereas with banks people have to go through a lot of bureaucratic processes, pay multiple fees, and wait for a transaction to go through, DeFi eliminates all of these barriers. People can hold money in their digital wallets safely and make transactions in just a couple of minutes. All you need to use DeFi is the internet. How does DeFi work? DeFi uses blockchain technology. The information about transactions made by users is saved in blocks and verified. Once the transaction is verified, the block is encrypted. Then this block is linked to another block with all the information from the previous one. These blocks are chained together, forming a segment of a blockchain ecosystem. This ecosystem is essentially a substitution for all the intermediaries involved in financial transactions in TradFi. What challenges do you see for DeFi in the short term? Since DeFi is a new financial system, it is not yet well regulated, and currently, laws are grounded on the notion of separate financial jurisdictions. The borderless transaction ability of DeFi is probably the most challenging here. However, DeFi is constantly evolving, so once regulations across different jurisdictions are set, the possibilities for further growth are limitless. Another big challenge is the entry barrier for customers. Despite the fact that there is already a large number of different protocols, the market itself is still very difficult for an average user to understand compared with, say, online banks. How can someone earn passive income through DeFi? The potential to earn passive income through DeFi is endless, and it is a truly exciting opportunity for anyone, everywhere. Yield aggregators are one of the best options out there, simplifying the whole investment process and giving investors solid double-digit APYs. \ The biggest advantage that the decentralized market gives us is the ability to fully own and earn on your own assets without a middleman. There are different types of DeFi protocols, ranging from lending protocols, where you can leave your assets as collateral and earn a percentage for providing liquidity to other market participants, to farming, where you provide liquidity to trading pairs, namely farming pairs, and receive part of the trading commission - farm tokens on decentralized exchanges. \ There are also other types of financial instruments, such as automated yield platforms, which make customers' lives easy and give stable and high rewards. What are the main features of Yield aggregators? How do they work? Yield aggregators are based on automated, complex strategies. Most such protocols optimize parts of the farming process by multiplying yield positions, automatically selling rewards, and re-adding them to the main liquidity pool. All of these actions save customers from constant market monitoring and a lot of separate transactions, optimizing gas fees and increasing profits. I would say that this is probably the best way to manage your assets in the new financial world.
Can you tell us more about your company, Bolide? What has been behind your impressive growth and popularity? Two years ago, together with my team, we got inspired by the idea of managing assets within the DeFi space and earning income solely from providing liquidity. Putting all of our efforts, knowledge, and ingenuity together, we created Bolide - a self-custody DeFi yield protocol, built on smart contracts. Bolide allows managing your crypto assets such as BTC, ETH, USDT, DAI, BUSD, USDC safely and easily. We also recently launched the Altcoin strategy which includes XRP, XVS, LTC, ADA, LINK, DOT, MATIC tokens. \ As an algorithm-based protocol, Bolide applies automated strategies to work across the DeFi market and get the highest possible returns on crypto assets. All yield generated by our strategies pays back in native utility token #BLID, which you can easily claim any time and move under staking or exchange on DEXes, LBANK to any other token. Our average APY level is about 8% for most single crypto assets, the staking is around 20%. \ We are working a lot on liquidation and impermanent loss risk control by constantly monitoring the token price and the liquidity level across the market and especially in protocols, which our strategy is leveraging. \ We recently developed a cross-chain strategy which is at the moment under audit. This strategy will allow users to bring their assets to Bolide from Polygon, Ethereum and Avalanche blockchains without using bridges. \ At Bolide we have a lot of plans and we really care about customers and the market itself. We are building relationships with all of the key market players and constantly supporting our community by doing different activities, giveaways and APY boosting.

Read More...
posted about 13 hours ago on hacker noon
In the fundamental analysis of crypto projects, several factors determine whether or not a coin or token will perform well after its launch. These factors include the whitepaper, founding team, project roadmap, tokenomics, etc. Among these factors, tokenomics sits at the top of the list in descending order of importance, yet very few people pay attention to it. \n Tokenomics is essential in evaluating a project’s long-term performance, and the word is derived from two English words - Token and Economics. In the remaining part of this article, I will try my best to help you understand how tokenomics gives us a glance into a project’s future and helps us play the long game of crypto. \ To understand the concept of tokenomics, let’s examine the words that form the word. TOKEN: A token is any cryptocurrency functioning on a blockchain that is intended to perform specific utility functions. Tokens are slightly distinct from crypto coins, but this article addresses both of them as “Tokens.” \ ECONOMICS: Wikipedia defines economics as a social science that studies the production, distribution, and consumption of goods and services. \ Putting both definitions together, we can extrapolate that tokenomics is concerned with a token's production, distribution, and consumption. Or simply put, tokenomics refer to the economics of a token. It is a well-thought-out plan that aims to influence how users interact with a coin or token. Does Tokenomics Exist in Traditional Finance? In the traditional setting where money flows within an economy, specific organizations oversee and control this movement of money within an economy. A project’s tokenomics is synonymous with the fiscal policies implemented by a central bank and other financial institutions to control cash flow. These fiscal policies encourage or discourage people from spending, lending, saving, cash flow, etc.  However, tokenomics distinguishes itself from government policies in the following ways. Implemented through codes: Tokenomics details are written in codes and uploaded on the blockchain, making them instant, highly effective, and easy to adopt. Transparency: Tokenomics transactions are easy to track, meaning you can follow money trails from the point of origin to its location at any point. Transaction details are disclosed to the general public, and anyone can access any record. Predictability: Since they are implemented through rigid codes, people can correctly use the code information to know what will happen to a token at a specific time. Immutability: Once tokenomics codes are written and uploaded to the blockchain, it is impossible to edit them, even if you are the creator. Bitcoin as a Case Study To further explain tokenomics, we’ll look at Bitcoin - blockchain’s most prominent cryptocurrency - as a case study. The design of Bitcoin’s tokenomics is such a masterpiece, and considering that it was created in the early days of the blockchain, it goes without saying that Satoshi Nakamoto, Bitcoin’s creator, is a genius. \ Bitcoin was created in 2008, and its total supply was programmed to be 21 million coins. However, not all of these coins were released into circulation. Instead, new Bitcoins are added to the blockchain every ten minutes to reward miners for mining a new block. But there’s more. \ This reward is halved after 210,000 blocks have been minted to slow the release of new Bitcoins into the blockchain. By estimate, it takes about four years to 210,000 blocks (which reiterates the predictability characteristics of tokenomics). 
The halving event has occurred thrice since Bitcoin was created. \ In 2008, the reward for miners was 50 BTC. It was reduced to 25 BTC in 2012, 12.5 BTC in 2016, and 6.25 BTC in 2020. By estimate again, the next halving is scheduled to happen in April 2024, and all 21 million Bitcoins will be minted by 2140. \ | DATE | REWARD (BTC) | |----|----| | 2008 | 50 | | 2012 | 25 | | 2016 | 12.5 | | 2020 | 6.25 | | 2024 (expected) | 3.125 (expected) | \ Why is Halving Important? Reducing miners' rewards will discourage them from mining and eventually affect the blockchain. However, miners and everyone else in the Bitcoin ecosystem can enjoy the halving because: It creates scarcity. Scarcity leads to an increase in demand. An increase in demand causes a price increase. A price increase secures a cryptocurrency’s sustainability. Elements of Tokenomics From Bitcoin’s tokenomics, we can deduce a few elements that are core to token development and should exist in the tokenomics of any project on the blockchain. Supply Supply in cryptocurrency exists in two forms. The total amount of a coin or token that is created and added to the blockchain - the maximum supply. The total amount of supply in circulation - the circulating supply. Bitcoin, as you know, has a maximum supply of 21 million coins, but the circulating supply at the time of writing is about 19.2 million. There are also tokens like Ethereum, USDC, USDT, etc., that do not have a maximum supply. Tokens can be categorized into inflationary and deflationary tokens depending on their maximum supply. \ Utility Utility refers to the specific purpose(s) for which a coin or token was created. Bitcoin, Ether, and BNB were all created to exchange and store value on the Bitcoin, Ethereum, and BNB blockchains. Tokens can also serve other purposes like staking, lending, farming, voting, etc. \ Distribution To release a token or coin into the blockchain, it is distributed among interested holders. This distribution can occur in two ways: A pre-mining launch: Here, selected investors can buy the token before it is circulated. A fair launch: The general public can buy the token simultaneously without prior access. \ When studying a token’s distribution, you must check the percentage held by creators and investors. It is important because when these investors and creators are major stakeholders, it hints that they believe in the long-term success of a project. However, if a large portion of a token's supply is distributed among the public, that’s a red flag. \ Burning Cryptocurrencies are burned - not in an incinerator or campfire. Burning refers to the removal of cryptocurrencies from the blockchain, and it is used to reduce the supply in circulation and trigger price increases. It also helps to keep the blockchain up and running. \ For example, BNB adopts coin-burning to remove coins from circulation and reduce the total supply of its tokens. With 200 million BNB pre-mined, BNB’s total supply is 165,116,760 as of June 2022. BNB will keep burning coins until 50% of the total supply is destroyed, which means BNB’s total supply will be reduced to 100 million BNB. Similarly, Ethereum started to burn ETH in 2021 to reduce its total supply. \ Incentives Have you ever heard that crypto rewards active participation? Yes. By using a blockchain consistently, users stand a chance to get incentives. Incentives encourage crypto enthusiasts to continue using the blockchain, ensuring the blockchain’s survival in the long term.
\ In Bitcoin, miners get rewards every time a block is minted, encouraging more people to mine, and this mechanism is known as Proof of Work. In Proof of Stake, which Ethereum uses, tokens are locked and used to validate transactions. People who lock their funds are known as validators and receive rewards every time a block is minted. Final Message Since the creation of Bitcoin’s tokenomics, the concept of tokenomics has continued to evolve, gaining relevance in other use cases of the blockchain like DeFi, NFT, etc. All of the elements of tokenomics are intertwined and connected, and no one exists on its own. By studying a project’s tokenomics, you can predict how well it will perform in the short and long term and how much people will be interested in the tokens. As important as it is, tokenomics is only one of the many factors to consider when doing fundamental analysis. You must still consider other factors like the whitepaper, project founder(s), etc. \n
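As a quick illustration of the halving schedule described above, the following sketch (mine, not the article's) shows why Bitcoin's issuance converges on roughly 21 million coins.

```python
# The block reward starts at 50 BTC and halves every 210,000 blocks, so the
# total supply approaches, but never exceeds, 21 million coins.
def total_bitcoin_supply() -> float:
    reward = 50.0          # initial block reward in BTC
    blocks_per_era = 210_000
    supply = 0.0
    while reward >= 1e-8:  # 1 satoshi is the smallest unit
        supply += reward * blocks_per_era
        reward /= 2        # the halving
    return supply

print(f"{total_bitcoin_supply():,.0f} BTC")  # ~21,000,000
```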

Read More...
posted about 14 hours ago on hacker noon
\ \ Social networking is exciting. Great apps make it easy for you to be heard and meet like-minded people worldwide, in secure and privacy-respecting ways. Unfortunately, social apps today are still far from that, but I do believe the paradigm is about to change. Since Twitter was recently acquired by the wealthiest person on Earth, a 'decentralized' social network called Mastodon became especially popular amongst those willing to replace Twitter with a more sustainable app. \ This article aims to describe why I strongly believe that \ Mastodon (as it is today) won't ever be a sustainable and popular (100 M+ daily active users) alternative to social media. Apps based on the groundbreaking Authenticated Transfer (AT) Protocol (e.g., Bluesky) might be the solution we need. \ Bluesky and the AT protocol The Authenticated Transfer Protocol (ATP) is open-source (publicly available) software for large-scale distributed social applications, created by the company Bluesky, PBLLC, a fully independent company founded in late 2021. However, the Bluesky project started in 2019 following the announcement by Twitter’s co-founder and ex-CEO that Twitter would be funding a small team to develop an open protocol for decentralized social media. \ The founders and owners of Bluesky, PBLLC are: Jack Dorsey, co-founder of Twitter (resigned from Twitter in late 2021). Jeremie Miller, inventor of XMPP. Jay Graber, its first CEO. \ \ \ The Bluesky community/team started by researching the state of the art of decentralized social protocols, and concluded that none of them fully met the goals they had for a network that enables long-term public conversations at scale. \ The ATP is based on a hybrid federated architecture, because it borrows features from peer-to-peer networks. Some of its most important features are (ref): \ Portability - when people can switch app providers without losing their identity and social graph (posts, followers, following, account settings) \ With email, if you change your provider then your email address has to change too. This is a common problem for federated social protocols, including ActivityPub (the one that powers Mastodon). \ We want users to have an easy path to switching servers. \ \ Trust - Algorithms dictate what we see and who we can reach. We must have control over our algorithms if we're going to trust in our online spaces. The ATP includes an open algorithms mode so users are able to adjust their experience. \ Our premise is to work towards a transparent and verifiable system from the bottom up by giving users ways to audit the performance of services and the ability to switch if they are dissatisfied. \ \ Scale - Great social networking platforms seamlessly bring 100s of millions of people together in a global conversation, and that requires engineering for scale. \ \ \ :::tip Bluesky, PBLLC will soon launch its social app powered by the ATP. It will be called Bluesky Social and, if you wish, you can register for a waitlist to test the app in its private beta stage, i.e. before it is launched to the public. ::: \ Mastodon (as it is) won't ever succeed \ Not user-friendly If this weren't true, you wouldn't find so many online tutorials on "how to use Mastodon" - see References B down the page, and note they range from 2017 to 2022 (Mastodon programmers had about 5 years to solve a simple problem and they couldn't). \ \ Lack of privacy Direct messages are not private (i.e. end-to-end encrypted) - server admins are watching you!
\ \ Scalability \ Server downtime on the Mastodon network isn’t a new issue. Raman’s research looked at downtime on Mastodon in 2019 and found servers had been inaccessible about 10% of the time. Even in Twitter’s early days, Raman says, it went offline only about 1.25% of the time. The nature of a volunteer-driven network means Mastodon can’t respond to crises like Big Tech companies do (ref). \ Mastodon has recently reached 2.6 M monthly active users and its servers are already buckling under the weight. That user base is a fraction of Twitter’s 200 M+ daily active users (ref). \ \ Account portability If a user moves to a new instance, they can redirect or migrate their old account. Redirection sets up a redirect notice on the old profile which tells users to follow the new account. Migration forces all followers to unfollow the old account and follow the new one, if the software on their instance supports this functionality. Your posts will not be moved, due to technical limitations (ref). \ Mastodon’s model comes with its own risks. If the server you join disappears, you could lose everything, just like if your email provider shuts down (ref). \ \ Hard to build up a network Isn't it meant to be a social app?! 🙄 \ \ Uncatchy and unoriginal name Mastodon is also an animal that went extinct thousands of years ago (image source). \ Mastodon also happens to be the name of a popular heavy metal band. \ \ Imposed long usernames Mastodon usernames are made up of the account name you choose followed by the domain name of the server your account is on. This means that the same account name registered on two different servers belongs to two entirely different users. \ \ Censorship The owner/administrator of the Mastodon server your account is hosted on has ultimate control over everything you do: if for some reason the admin of kpop.social doesn’t like that you boosted a toot (as posts are called on Mastodon) from dolphin.town, she/he could remove it, delete your account, or even “defederate” that server (i.e., block all dolphin.town posts on kpop.social) (ref). \ :::info Sources for all tweets can be found on the original article, below each image. ::: References A A Self-Authenticating Social Protocol Decentralized Social Networks — comparing federated and peer-to-peer protocols Sorry, Elon haters: Mastodon still can’t replace Twitter Six reasons Mastodon won't survive Mastodon is crumbling—and many blame its creator References B Looking for Twitter alternatives? Here’s how to use Mastodon How to use Mastodon, the Twitter alternative that’s becoming super popular A beginner’s guide to Mastodon, the hot new open-source Twitter clone How to Find Your Twitter Friends on Mastodon \ :::info Also published here. ::: \

Read More...
posted about 14 hours ago on hacker noon
In today's investing world, assets such as gold have been labelled safe-haven assets that can help investors mitigate the effects of inflation and currency depreciation. While gold has held this title for decades, technological advancements have propelled digital assets such as Bitcoin (BTC) into the spotlight. \ The current economic environment has caused gold to confront major headwinds not seen in approximately 30 years. It demonstrates that neither Bitcoin nor Gold is immune to economic inconsistencies; nonetheless, compared to fiat and other similar assets, the pair has outperformed significantly. Inherent Challenges of Gold as a Store of Value The status of gold as a store of value is universally acknowledged. Gold has various distinguishing characteristics that have fueled its global demand as a single commodity. The precious metal is durable, non-corrosive, and highly liquid, making it a truly unique medium of exchange. \ While the benefits of gold are frequently publicized, particularly by dealers, several drawbacks limit the asset's adoption. The first issue is counterfeiting. Because gold is so common, its fake versions can be manufactured, making it difficult to trace them. \ Even though testing models (though not completely reliable) have been developed to combat these counterfeits, investors must still contend with storage issues and the incidence of theft from criminals. These difficulties are not limited to gold; they are a major problem for all physical commodities. Integrating Blockchain Technology to Solve Gold’s Woes The beauty of innovation is that almost any problem can be solved. In the case of gold, the incorporation of blockchain technology has proven to be a very effective tool in combating counterfeiting, among other issues. \ With regard to gold, blockchain can be used in different ways, including tokenization and storing the asset's history on the immutable public ledger. These two models help simplify the storage challenges and combat counterfeit gold, as all assets can have a unique inscription through which they can verify their originality on a blockchain. \ The idea of improving gold through blockchain technology is already being implemented by some of the Web3 ecosystem's most innovative startups, including but not limited to Zambesi Gold (ZGD), Pax (PAXG), and Tether (XAUT). While each of these projects is distinct in its own right, they all follow the same model of pegging gold on a 1:1 basis with real physical gold. \ The current market price of gold determines the value of these and similar digital currencies. Each gold-backed cryptocurrency identifies a single token as being worth the equivalent of a certain number of grams or troy ounces of gold. Physical collateralized assets in the company's reserves, vault, or a trusted custodian serve as the equivalent.** \n An investor can profit from the rise in the gold price and the rise in the cryptocurrency market using such a tool. Gold provides relative price stability because it is less volatile than stocks. The crypto component provides higher price growth potential than traditional gold investments. \ It is also worth noting that blockchain allows for fractionalization, which lowers the entry barrier for those looking to invest in a new, hybrid type of asset. A gold-backed token, whose price is obviously lower than gold's, could also be a great way to diversify because an investor can put in a small amount of money, implying that price fluctuations are less likely to harm their wealth. 
\ Although this asset class is still relatively new to the market, the community has already witnessed some gold-backed tokens fail during a market downturn. \ So, at the end of this article, I'd like to give you a couple of tips that will help you decide whether it's worthwhile to try to invest in a gold-backed token or if you should look for another opportunity: \ Determine the role of physical gold in the company's business: is it just an investment solution or something they mine or use for production? Check if a company has a license to issue such assets. Check if a company has its own custodian trust, stores gold in an independent bank, or if the token is simply told to be pegged to the price of gold. Go to the company's website and look for a detailed description of tokenomics: the calculations and goals should be clear. Examine the token's features and utilities: compare average gold and crypto returns to projections for this token to see if they are realistic. \ You can invest with forethought if you find that the company has long-term goals and a robust business to back up its offering. Keep in mind that the token's fate will depend less on the market and more on the sound judgment of those responsible for issuing it. A product with a solid project behind it will be able to weather even the worst economic downturns.

Read More...
posted about 14 hours ago on hacker noon
The venture capital space has been somewhat shaky lately due to general economic uncertainty, rising inflation, interest rate spikes, fears of recession, and volatile economic situations. \ Some obvious evidence of this can be seen in overall VC deal value from 2021 to 2022. More precisely, the overall VC funding fell from $713B in 2021 to $230B by Q2 of 2022. Interestingly, deal volume had a contrary spike of 4,567 to 15,652 deals in the same period. \n Analysts reported reasons for VC investment reservations as a refocus on business fundamentals amid the global economic downturn and a focus on profitability rather than growth. \ Image source: Pitchbook Crypto VC Fund Directions - More Web3 deals, less NFT & DeFi deals Similar to other high growth industries, overall VC deals across the blockchain industry have also declined this year, dropping 71% from Q1 to Q2. But more interestingly, the most active blockchain VCs have slowly tilted their portfolio towards Web3 solutions as opposed to DeFi and NFT, which were the biggest sectors to attract capital in 2021. \ As displayed in the diagram above, there were quite a number of significant raises by both traditional and crypto VCs throughout 2022. Industry titans such as Sequoia Capital and Andreessen Horowitz (a16z) were particularly active, raising billions in funding to enhance the growth of Web3 ecosystems. Below is a highlight of the top crypto VC raises this year: \ Andreessen Horowitz (a16z) launches two billion-dollar crypto funds Andreessen Horowitz, also known as a16z, announced a $2.2 billion crypto fund in Q1 targeting infrastructure and Web3 projects. Later in Q2, this established American VC launched another $4.5 billion fund to support the ‘Golden Era of Web3’. \ Sequoia Capital actively expanded its crypto VC footprint Earlier in the year, Sequoia Capital raised $500 million to $600 million for a token fund to invest in popular DeFi protocols. Sequoia also featured prominently in Q2 with a $2 billion early-stage venture growth fund for the Indian market and $850 million Southeast Asia fund. \ FTX Ventures $2 billion fund Although now defunct, FTX exchange had launched a $2 billion crypto fund dubbed ‘FTX ventures’. The fund was designed to exist in parallel with FTX sister company Alameda Research, focusing on fintech, gaming and healthcare. \ Haun Ventures raises $1.5 billion Web3 fund Following Katie Haun’s departure from a16z, she started Haun ventures which managed to raise $1.5 billion during the first quarter of 2022. Haun ventures has been investing this capital towards Web3 startups and innovations that are building the next generation of the internet. \ World Innovation Lab (WiL) $1 billion fund features Web3 Innovations WiL is also jumping into the Web3 bandwagon, this U.S-Japan focused fund raised $1 billion in Q2 and noted that it would invest in the Web3 domain as well. Given their experience in B2B Saas investment rounds, the interest in funding Web3 projects comes as no surprise. \ Binance Labs $500 million investment fund Binance Labs, along with key partners DST Global and Breyer Capital, as well as other private equity, family offices, and limited partners, closed a $500 million investment fund in Q2. The fund spreads capital across different growth stages to promote the adoption of blockchain and Web3. \ Multicoin Capital debuts $430 million Venture Fund III Multicoin's $430 million Venture Fund III was one of the biggest raises in Q3. 
Venture Fund III aims to invest $500,000-$25 million in early-stage blockchain and Web3 startups, as well as companies in growth rounds that can benefit from $100 million or more in capital. \ CoinFund Ventures announces $300 million early-stage Web3 fund While the third quarter of 2022 saw a massive capital flight following the collapse of prominent crypto VCs like Three Arrows Capital, CoinFund established a $300 million fund to support infrastructure development and Web3 companies with significant market potential. Funding flow topped early in the year 2022 started off on quite a bullish trajectory: VC interest was still very high, as most digital assets had recorded all-time highs towards the end of 2021. The estimated funding invested across the cryptocurrency and blockchain space peaked at around $11 billion in Q1 of 2022, a figure that had been on a steady rise for the two years prior. \ However, in the wake of Q2, things took a different turn as the Fed started to tighten monetary policy. Add an escalating situation between Russia and Ukraine on top, and the environment suddenly became vastly more difficult for investors to stay risk tolerant in than in the months before. \ The crypto industry was plagued to an extraordinary degree in this regard, as major actors such as Terra (LUNA) and its stablecoin UST went bust, setting off cascading effects across the industry. \ Overall, Q2 of 2022 saw a pullback in VC funding across the space, with the figure dropping slightly below $10 billion. On the brighter side, the cascading streak of insolvencies from the Terra (LUNA) fiasco came with lessons and wisdom for VCs, shaping new investment patterns in Q3. \ Even with the market turning bearish, crypto VCs continued to invest in Q3, particularly in under-explored sub-niches. Cointelegraph’s venture capital database report revealed an unusual shift in individual deals towards Web3 projects with a strong infrastructural leaning; examples are GameFi and metaverse-related infrastructure projects. Contrary to the trends seen in 2021, which were mostly DeFi- and NFT-related, over 44% of individual deals are now Web3-based. \ High-Ticket Crypto Deals of 2022 Aptos raised $150 million in a Series A in July Launched by two former Meta developers, the Aptos project is a reincarnation of Meta’s 2019 Diem blockchain project. Aptos is a Web3 infrastructure project with plans to build more efficient developer tools for Web3 solutions. The Aptos Series A round, which ended up at $150 million, included investments from Andreessen Horowitz, Multicoin Capital, Circle Ventures, and others. Limit Breakers, the creator of DigiDaigaku, raises $200 million Limit Breakers raised over $200 million in two funding rounds for its novel “free-to-own” gaming incentive model. The funding was led by Josh Buckley, the chair of Mino Games, along with investment firms Paradigm and Standard Crypto. Limit Breakers' free-to-own model goes beyond virtual earnings to allow players to own their preferred gaming characters, which cuts to the heart of gaming infrastructure on Web3. Mysten Labs raises $300 million at a $2+ billion valuation Mysten Labs is a Web3 infrastructure company building its flagship project, the Sui blockchain protocol, which targets optimizing web applications’ speed and cost on a blockchain. 
\ Some of the leading VCs who participated in this funding round include Binance Labs, Coinbase Ventures, Andreessen Horowitz, Circle Ventures, Lightspeed Venture Partners, Jump Crypto, Apollo, Franklin Templeton, Sino Global, and several others. Looking Into the Horizon The crypto venture capital sector got off to such a strong start this year that expectations were high for the remainder of the year. Needless to say, cheques got smaller and deals less frequent as the traditional economy started showing severe signs of instability. Q2 and Q3 in particular saw a sharp decline in funding volume across tech-related sectors. \ In hindsight, it seems obvious that the venture space (much like the rest of crypto) was a bit inflated, and we're now slowly waiting for risk appetite to return to the space. Although it is hard to predict exactly when the overall crypto markets will take off again, it is evident that crypto VCs are keeping close tabs on developments, especially in Web3, and the large amount of funding available should be good reassurance that continued innovation will take place in the space.

Read More...
posted about 15 hours ago on hacker noon
How are you, hacker? 🪐 What's happening in tech this week: The Noonification by HackerNoon has got you covered with fresh content from our top 5 stories of the day, every day at noon your local time! Set your email preferences here. How I Live Stream My Brain with Amazon IVS, a Muse Headband and React By @amazonivs [ 16 Min read ] This is the first time we've used React to broadcast a live stream of our brain data. Read More. FTX: The Greatest Crypto Magic Trick in the World 🪄 By @zamboglou [ 3 Min read ] Suppose it was discovered that SBF could maintain client deposits in non-segregated accounts and that he could offer any loan to Alameda; what next? Read More. 🧑‍💻 What happened in your world this week? It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We've got you covered ⬇️⬇️⬇️ ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME We hope you enjoy this wealth of free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The Hacker Noon Team ✌️

Read More...
posted about 15 hours ago on hacker noon
For decades, artists and musicians have gotten the short end of the stick, struggling against outdated gatekeeping models, unfair compensation, inefficient payment structures, and barriers inhibiting meaningful connections with their audiences. Flash forward to today, 2022: a world where NFTs provide alternatives for creators who want to connect directly with their fans and get fairly compensated for their work. The token economy, smart contracts, and the rising Web3 environment are transforming the art and music world for the benefit of creators everywhere. Let’s explore this new creative world. \ The Stifling of Artists and Musicians Art and music are spaces where individuals can express themselves and showcase unique creative talents. They’re also spaces where creators have been trapped within broken systems for a long time. Like the art industry, the music industry puts control in the hands of a few gatekeepers who determine who gets to be seen and heard, when and how they can be accessed, and sometimes even what they can create. \ Record labels and streaming services have a crippling grip on their artists, controlling the release of new music and taking a massive cut on sales. A musician in the US earns only between $20,000 and $25,000 annually from their music, and 61% of musicians say they don’t earn enough from their music to meet basic living expenses. \ This is clear when looking at streaming service payout figures. In order to earn $1, a musician needs 143 streams of a song on Apple Music, 229 streams on Spotify, 250 on Amazon Music, and 305 on Soundcloud. Fine artists aren’t necessarily doing any better. Our recent report on “Making a Living as An Artist” found that only 49% of artists surveyed can earn a steady income through their art, citing “generating consistent and stable income” as the biggest challenge they face. \ Given this, many creators turn to digital channels to help them showcase and distribute their art — and it’s within these digital channels that technological innovations are creating a more equitable and accessible future for creators. \ NFTs and Web3’s Benefits for Creators What can NFTs do for artists and musicians? Plenty. And despite the market downturn, they’re already positively impacting creators in several ways. \ Access, Gatekeeping, and Patronage The biggest barrier between creators and their fans is the gatekeepers, those who have historically deemed what is worthy of showcasing via traditional channels. However, with the rise in technology over the past few decades, there’s more democratization of the arts overall as creatives use websites, digital platforms, and social media to get the word out about their work. NFTs take that one step further: creators can now mint and monetize their work and transact directly with fans. \ Alongside online galleries and marketplaces for artists is the rise in NFT marketplaces for musicians. \ One of those is Sound.xyz, a Web3-based platform “driven by the relationship between listeners and artists,” which recently raised $5 million in seed funding. Another platform is Royal, which not only lets fans directly support their favorite musicians but allows them to share in royalties as well. \ Royalties While musicians have been able to generate royalty payments from their work (though when and how much is another discussion), visual artists can’t because there are no existing US federal laws on the topic, and resale royalty laws vary from state to state — and are often not enforced. 
\ The first case of this to attract major media attention was in 1973, when artist Robert Rauschenberg sold a painting for $900, only to see the collector resell it for $85,000 at a Sotheby’s auction, with the artist having no right to resale earnings. That changes with NFTs, where smart contracts can include royalty payments for subsequent sales of the work so that when an artist’s work increases in value as their career progresses, they can be rightly compensated for it. \ Creative and Career Freedom Creating NFTs and interacting directly with an audience gives creators more freedom to create the art and music they want instead of having to cater to a brand, corporate patron, or label. There's an audience out there for everyone, and with more creative freedom, artists and musicians will be able to find theirs more easily. Not having to be beholden to sponsors and corporate patrons means that artists and musicians also have more control over their career trajectories. \n Web3 Benefits The advent of Web3 will be much better for musicians and artists in terms of earning potential than Web2. a16z’s recent research found that Web3 platforms that support NFT sales and direct fan interaction are paying substantially more to makers at an average rate of $174,000 per creator. This is compared to $0.10 per user on Meta, $636 per artist on Spotify, and $405 per channel on YouTube. \ Every day new platforms are being built within the Web3 space to better support creators and change the model so that an artist and their audiences are in touch with one another. Artists can drop new music anytime they want, and it goes directly to fans who can support their favorite artists by buying their new music. \ The Future of Artist Creation In the near future, will creators get paid what they're worth, do the work they want, and be fairly compensated for it? Yes, as the art and music worlds move towards the increased access, connections, and opportunities facilitated by Web3 innovations. The idea of more meaningful creative work — and creators actually earning a living by making it — is swiftly becoming a reality. \n
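To make the resale-royalty mechanic concrete, here is a minimal JavaScript sketch of the kind of split a marketplace could compute when a piece is resold. The fixed 10% rate, the sale object shape, and the settleResale helper are illustrative assumptions for this article, not any particular marketplace's or token standard's actual API; in practice the royalty terms live in the NFT's smart contract or metadata.

// Illustrative only: a simplified resale settlement with an assumed fixed rate.
const ROYALTY_BPS = 1000; // 10% expressed in basis points (assumption)

function settleResale(sale) {
  // sale = { price, creator, seller } is a hypothetical shape for this sketch
  const royalty = (sale.price * ROYALTY_BPS) / 10000;
  return {
    creatorPayout: royalty,             // paid to the original artist on every resale
    sellerPayout: sale.price - royalty, // remainder goes to the current owner
  };
}

// Rauschenberg-style example: a work resold for $85,000
console.log(settleResale({ price: 85000, creator: "artist", seller: "collector" }));
// -> { creatorPayout: 8500, sellerPayout: 76500 }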

Read More...
posted about 15 hours ago on hacker noon
70 - Debugging on Mobile is Too Hard I write one article out of my comfort zone (Clean Code / Refactoring / Code Smells) every month. Hackernoon contests are a great source of inspiration. \ I founded a startup 10 years ago to develop Android and iOS mobile apps, so this topic is not too far from my area of expertise. \ TL;DR: Some tips and tricks on how to debug a mobile application Why Is Debugging Mobile Software Harder than Desktop / Web Software? The Problem Debugging mobile software can be a challenging task, but it is an essential part of the development process. \ It can be harder than debugging desktop or web software for several reasons. One of the main challenges is the wide variety of mobile devices and operating systems that are available. \ This can make it difficult to test and debug mobile software. \ We need to ensure that our software works on a wide range of devices with different hardware and software configurations. \ Another challenge with debugging mobile software is the limited resources available on mobile devices. \ Mobile devices are smaller and have less processing power, memory, and storage than desktop or laptop computers. \ We can't always run complex debugging tools and processes on mobile devices, as they may not have the resources to support them. \ Mobile software often relies on network connections and other external resources, such as sensors and cameras. \ This can make it difficult to debug mobile software, as issues may be caused by external factors such as network connectivity or device hardware. \ With the right tools and strategies, it is possible to debug code on a mobile device. \ Here are some tips and best practices: Some Solutions One of the first steps in debugging mobile software is to identify the problem. \ We can do this by reproducing the issue and observing the behavior of the software. \ We can use a variety of tools and techniques to help us identify the root cause of the problem, such as logging, debugging tools, and testing frameworks. \ Once we identify the problem, we can modify the code and run it again, use debugging tools to step through the code, or use testing frameworks to identify and fix bugs. Remote Debuggers Our first ally is a remote debugger. \ A debugger is a tool that allows us to pause the execution of our code, inspect variables and data, and step through the code line by line. \ This can help us to identify the source of the problem and come up with a solution. \ We should use a device with a powerful processor and plenty of memory. \ Debugging code can be resource-intensive, so it's important to use a mobile device that has a powerful processor and enough memory to handle the demands of debugging. \ We need to ensure that our device can keep up with the demands of the debugging process and minimize the chances of the device crashing or becoming unresponsive. \ We can also connect most mobile devices to a computer using a USB cable. \ Many mobile devices come with a built-in remote debugger that allows you to connect to the device from a computer and debug your code remotely. \ This can be a convenient way to debug code on a mobile device, especially if your device is not connected to a computer or if you need to debug your code on multiple devices at the same time. In addition to remote debuggers, there are also mobile-specific debuggers that are designed specifically for debugging code on mobile devices. 
\ These debuggers often come with a range of features and tools that are specifically tailored to the needs of mobile developers, such as the ability to simulate different device configurations or network conditions. \ Devices come in a wide variety of shapes, sizes, and configurations, so it's important to test your code on as many different devices and operating systems as possible. \ This will help you identify and fix any issues that may be specific to certain devices or operating systems, and ensure that your code works as intended on a wide range of devices. Emulators One way to overcome these limitations is to use a mobile device with a larger screen, such as a tablet. \ A tablet can provide more screen real estate, which can make it easier to see and interact with your code. \ Some tablets come with a keyboard and stylus, which can provide a more traditional input experience when working with code. Testing Frameworks We can also use a variety of testing frameworks to help us debug our mobile software. Testing frameworks allow us to create automated tests we can run against our software to identify and fix bugs. \ This can help to ensure that our software is of high quality and performs well on a variety of devices. Logging There are many logging tools that can be used to help debug code on a mobile device. For example, you can use log messages to print information about the state of your code. \ Some logging tools are: \ Crashlytics is a crash reporting tool that allows you to track and analyze crashes in your mobile app. \ With Crashlytics, you can see detailed information about crashes in your app, including the stack trace and the device and OS version where the crash occurred. \ This information can help you quickly identify and fix issues that are causing crashes in your app. Logcat is a logging tool that is part of the Android SDK and allows you to view and filter log messages generated by your app and the Android system. \ It is a powerful tool that can help you identify and debug issues with your app, such as crashes, performance issues, and incorrect behavior. \ If you are developing an iOS app, you can use the Xcode Debug Console to view and filter log messages generated by your app. \ The Debug Console provides a similar set of features to Logcat, and allows you to easily identify and debug issues with your iOS app. \ HockeyApp is a platform for distributing, testing, and collecting feedback on mobile apps. In addition to providing tools for distributing and testing your app, HockeyApp also includes an amazing logging feature. \ You can also build your own custom logging tool (a minimal sketch appears at the end of this article). Performance Testing You can use performance profiling tools to help identify and fix performance issues in your code. Some tools allow you to collect detailed information about the performance of your code, such as the amount of CPU and memory usage, the number of function calls, and the execution time of your code. \ This information can help you identify areas of your code that may be causing performance issues, and then take steps to optimize and improve the performance of your code. Mobile Webapps There are several ways to test mobile webapps on browsers, including the following: Most modern web browsers, such as Google Chrome and Mozilla Firefox, have a built-in mobile browser emulator that allows you to simulate how your webapp will look and function on a mobile device. \ To access the emulator, you can open the developer tools in your web browser and select the "Emulate" tab. 
\ From here, you can choose the type of mobile device you want to simulate and adjust the screen size, pixel density, and other settings to match the device you are testing on. \ You can test your mobile webapp using a physical mobile device, such as a smartphone or tablet. To do this, you can either open the webapp in the mobile device's web browser and test it directly, or you can use a tool like Appium to run your webapp in an emulator or simulator on the device. Another option is to use a cloud-based testing service, such as BrowserStack or Sauce Labs, which allows you to test your webapp on a variety of different mobile devices and browsers. \ These services provide a wide range of mobile devices that you can use to test your webapp, as well as tools for automating your tests and collecting detailed reports on the performance and functionality of your webapp. \ Regardless of which method you choose, it is important to test your mobile webapp on a variety of different mobile devices and browsers to ensure that it works properly and provides a good user experience on all devices. The Process Once you have the appropriate tools in place, the process of debugging code on mobile devices is similar to debugging code on a desktop or laptop computer. \ You can use techniques such as breakpoints, stepping through code, and inspecting variables to identify and fix bugs in your code. \ One important consideration when debugging code on mobile devices is the performance of your code. \ Mobile devices often have limited processing power and memory compared to desktop or laptop computers, which can affect the performance of your code. \ This can make it difficult to debug performance issues. \ The behavior of your code on a mobile device may be different than on a desktop or laptop computer. \ Debugging mobile software can be a challenging task, but it is an essential part of the development process. \ By using a combination of tools, techniques, and testing frameworks, we can identify and fix issues in our code, ensuring that our software is of high quality and performs well on a variety of devices.
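As mentioned above, you can also build your own custom logging tool. Here is a minimal JavaScript sketch of what such a tool could look like for a mobile webapp or hybrid app: it logs locally, buffers entries, and ships them in small batches to a backend. The LOG_ENDPOINT URL and the batch-over-fetch approach are assumptions for illustration, not a prescribed setup.

const LOG_ENDPOINT = "https://example.com/logs"; // hypothetical collection endpoint
const buffer = [];

function log(level, message, extra = {}) {
  const entry = {
    level, // "debug" | "info" | "warn" | "error"
    message,
    extra,
    timestamp: new Date().toISOString(),
    userAgent: typeof navigator !== "undefined" ? navigator.userAgent : "unknown",
  };
  // Echo locally so the message also shows up in Logcat / the Xcode console / DevTools
  (console[level] || console.log)(message, extra);
  buffer.push(entry);
  if (buffer.length >= 20) flush(); // send in small batches to save battery and network
}

function flush() {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length);
  fetch(LOG_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  }).catch(() => batch.forEach((entry) => buffer.push(entry))); // re-queue on flaky mobile networks
}

// Usage
log("info", "checkout screen opened", { cartItems: 3 });

Keeping the tool this small matters on mobile: batching and a single fetch per flush keep the logging overhead from becoming the performance problem you are trying to debug.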

Read More...
posted about 16 hours ago on hacker noon
Free as in Freedom, by Sam Williams, is part of the HackerNoon Books Series. You can jump to any chapter in this book here. OPEN SOURCE In November, 1995, Peter Salus, a member of the Free Software Foundation and author of the 1994 book, A Quarter Century of Unix, issued a call for papers to members of the GNU Project's "system-discuss" mailing list. Salus, the conference's scheduled chairman, wanted to tip off fellow hackers about the upcoming Conference on Freely Redistributable Software in Cambridge, Massachusetts. Slated for February, 1996 and sponsored by the Free Software Foundation, the event promised to be the first engineering conference solely dedicated to free software and, in a show of unity with other free software programmers, welcomed papers on "any aspect of GNU, Linux, NetBSD, 386BSD, FreeBSD, Perl, Tcl/tk, and other tools for which the code is accessible and redistributable." Salus wrote: Over the past 15 years, free and low-cost software has become ubiquitous. This conference will bring together implementers of several different types of freely redistributable software and publishers of such software (on various media). There will be tutorials and refereed papers, as well as keynotes by Linus Torvalds and Richard Stallman. See Peter Salus, "FYI-Conference on Freely Redistributable Software, 2/2, Cambridge" (1995) (archived by Terry Winograd). http://hci.stanford.edu/pcd-archives/pcd-fyi/1995/0078.html One of the first people to receive Salus' email was conference committee member Eric S. Raymond. Although not the leader of a project or company like the various other members of the list, Raymond had built a tidy reputation within the hacker community as a major contributor to GNU Emacs and as editor of The New Hacker Dictionary, a book version of the hacking community's decade-old Jargon File. For Raymond, the 1996 conference was a welcome event. Active in the GNU Project during the 1980s, Raymond had distanced himself from the project in 1992, citing, like many others before him, Stallman's "micro-management" style. "Richard kicked up a fuss about my making unauthorized modifications when I was cleaning up the Emacs LISP libraries," Raymond recalls. "It frustrated me so much that I decided I didn't want to work with him anymore." Despite the falling out, Raymond remained active in the free software community. So much so that when Salus suggested a conference pairing Stallman and Torvalds as keynote speakers, Raymond eagerly seconded the idea. With Stallman representing the older, wiser contingent of ITS/Unix hackers and Torvalds representing the younger, more energetic crop of Linux hackers, the pairing indicated a symbolic show of unity that could only be beneficial, especially to ambitious younger (i.e., below 40) hackers such as Raymond. "I sort of had a foot in both camps," Raymond says. By the time of the conference, the tension between those two camps had become palpable. Both groups had one thing in common, though: the conference was their first chance to meet the Finnish wunderkind in the flesh. Surprisingly, Torvalds proved himself to be a charming, affable speaker. Possessing only a slight Swedish accent, Torvalds surprised audience members with his quick, self-effacing wit. Although Linus Torvalds is Finnish, his mother tongue is Swedish. "The Rampantly Unofficial Linus FAQ" offers a brief explanation: Finland has a significant (about 6%) Swedish-speaking minority population. 
They call themselves "finlandssvensk" or "finlandssvenskar" and consider themselves Finns; many of their families have lived in Finland for centuries. Swedish is one of Finland's two official languages. http://tuxedo.org/~esr/faqs/linus/ Even more surprising, says Raymond, was Torvalds' equal willingness to take potshots at other prominent hackers, including the most prominent hacker of all, Richard Stallman. By the end of the conference, Torvalds' half-hacker, half-slacker manner was winning over older and younger conference-goers alike."It was a pivotal moment," recalls Raymond. "Before 1996, Richard was the only credible claimant to being the ideological leader of the entire culture. People who dissented didn't do so in public. The person who broke that taboo was Torvalds."The ultimate breach of taboo would come near the end of the show. During a discussion on the growing market dominance of Microsoft Windows or some similar topic, Torvalds admitted to being a fan of Microsoft's PowerPoint slideshow software program. From the perspective of old-line software purists, it was like a Mormon bragging in church about his fondness of whiskey. From the perspective of Torvalds and his growing band of followers, it was simply common sense. Why shun worthy proprietary software programs just to make a point? Being a hacker wasn't about suffering, it was about getting the job done."That was a pretty shocking thing to say," Raymond remembers. "Then again, he was able to do that, because by 1995 and 1996, he was rapidly acquiring clout."Stallman, for his part, doesn't remember any tension at the 1996 conference, but he does remember later feeling the sting of Torvalds' celebrated cheekiness. "There was a thing in the Linux documentation which says print out the GNU coding standards and then tear them up," says Stallman, recalling one example. "OK, so he disagrees with some of our conventions. That's fine, but he picked a singularly nasty way of saying so. He could have just said `Here's the way I think you should indent your code.' Fine. There should be no hostility there."For Raymond, the warm reception other hackers gave to Torvalds' comments merely confirmed his suspicions. The dividing line separating Linux developers from GNU/Linux developers was largely generational. Many Linux hackers, like Torvalds, had grown up in a world of proprietary software. Unless a program was clearly inferior, most saw little reason to rail against a program on licensing issues alone. Somewhere in the universe of free software systems lurked a program that hackers might someday turn into a free software alternative to PowerPoint. Until then, why begrudge Microsoft the initiative of developing the program and reserving the rights to it?As a former GNU Project member, Raymond sensed an added dynamic to the tension between Stallman and Torvalds. In the decade since launching the GNU Project, Stallman had built up a fearsome reputation as a programmer. He had also built up a reputation for intransigence both in terms of software design and people management. Shortly before the 1996 conference, the Free Software Foundation would experience a full-scale staff defection, blamed in large part on Stallman. 
Brian Youmans, a current FSF staffer hired by Salus in the wake of the resignations, recalls the scene: "At one point, Peter [Salus] was the only staff member working in the office."For Raymond, the defection merely confirmed a growing suspicion: recent delays such as the HURD and recent troubles such as the Lucid-Emacs schism reflected problems normally associated with software project management, not software code development. Shortly after the Freely Redistributable Software Conference, Raymond began working on his own pet software project, a popmail utility called " fetchmail." Taking a cue from Torvalds, Raymond issued his program with a tacked-on promise to update the source code as early and as often as possible. When users began sending in bug reports and feature suggestions, Raymond, at first anticipating a tangled mess, found the resulting software surprisingly sturdy. Analyzing the success of the Torvalds approach, Raymond issued a quick analysis: using the Internet as his "petri dish" and the harsh scrutiny of the hacker community as a form of natural selection, Torvalds had created an evolutionary model free of central planning.What's more, Raymond decided, Torvalds had found a way around Brooks' Law. First articulated by Fred P. Brooks, manager of IBM's OS/360 project and author of the 1975 book, The Mythical Man-Month , Brooks' Law held that adding developers to a project only resulted in further project delays. Believing as most hackers that software, like soup, benefits from a limited number of cooks, Raymond sensed something revolutionary at work. In inviting more and more cooks into the kitchen, Torvalds had actually found away to make the resulting software better.Brooks' Law is the shorthand summary of the following quote taken from Brooks' book: Since software construction is inherently a systems effort-an exercise in complex interrelationships-communication effort is great, and it quickly dominates the decrease in individual task time brought about by partitioning. Adding more men then lengthens, not shortens, the schedule. See Fred P. Brooks, The Mythical Man-Month (Addison Wesley Publishing, 1995)Raymond put his observations on paper. He crafted them into a speech, which he promptly delivered before a group of friends and neighbors in Chester County, Pennsylvania. Dubbed " The Cathedral and the Bazaar," the speech contrasted the management styles of the GNU Project with the management style of Torvalds and the kernel hackers. Raymond says the response was enthusiastic, but not nearly as enthusiastic as the one he received during the 1997 Linux Kongress, a gathering of Linux users in Germany the next spring."At the Kongress, they gave me a standing ovation at the end of the speech," Raymond recalls. "I took that as significant for two reasons. For one thing, it meant they were excited by what they were hearing. For another thing, it meant they were excited even after hearing the speech delivered through a language barrier."Eventually, Raymond would convert the speech into a paper, also titled "The Cathedral and the Bazaar." The paper drew its name from Raymond's central analogy. GNU programs were "cathedrals," impressive, centrally planned monuments to the hacker ethic, built to stand the test of time. Linux, on the other hand, was more like "a great babbling bazaar," a software program developed through the loose decentralizing dynamics of the Internet.Implicit within each analogy was a comparison of Stallman and Torvalds. 
Where Stallman served as the classic model of the cathedral architect-i.e., a programming "wizard" who could disappear for 18 months and return with something like the GNU C Compiler-Torvalds was more like a genial dinner-party host. In letting others lead the Linux design discussion and stepping in only when the entire table needed a referee, Torvalds had created a development model very much reflective of his own laid-back personality. From the Torvalds' perspective, the most important managerial task was not imposing control but keeping the ideas flowing.Summarized Raymond, "I think Linus's cleverest and most consequential hack was not the construction of the Linux kernel itself, but rather his invention of the Linux development model."See Eric Raymond, "The Cathredral and the Bazaar" (1997).In summarizing the secrets of Torvalds' managerial success, Raymond himself had pulled off a coup. One of the audience members at the Linux Kongress was Tim O'Reilly, publisher of O'Reilly & Associates, a company specializing in software manuals and software-related books (and the publisher of this book). After hearing Raymond's Kongress speech, O'Reilly promptly invited Raymond to deliver it again at the company's inaugural Perl Conference later that year in Monterey, California.Although the conference was supposed to focus on Perl, a scripting language created by Unix hacker Larry Wall, O'Reilly assured Raymond that the conference would address other free software technologies. Given the growing commercial interest in Linux and Apache, a popular free software web server, O'Reilly hoped to use the event to publicize the role of free software in creating the entire infrastructure of the Internet. From web-friendly languages such as Perl and Python to back-room programs such as BIND (the Berkeley Internet Naming Daemon), a software tool that lets users replace arcane IP numbers with the easy-to-remember domain-name addresses (e.g., amazon.com), and sendmail, the most popular mail program on the Internet, free software had become an emergent phenomenon. Like a colony of ants creating a beautiful nest one grain of sand at a time, the only thing missing was the communal self-awareness. O'Reilly saw Raymond's speech as a good way to inspire that self-awareness, to drive home the point that free software development didn't start and end with the GNU Project. Programming languages, such as Perl and Python, and Internet software, such as BIND, sendmail, and Apache, demonstrated that free software was already ubiquitous and influential. He also assured Raymond an even warmer reception than the one at Linux Kongress.O'Reilly was right. "This time, I got the standing ovation before the speech," says Raymond, laughing.As predicted, the audience was stocked not only with hackers, but with other people interested in the growing power of the free software movement. One contingent included a group from Netscape, the Mountain View, California startup then nearing the end game of its three-year battle with Microsoft for control of the web-browser market.Intrigued by Raymond's speech and anxious to win back lost market share, Netscape executives took the message back to corporate headquarters. 
A few months later, in January, 1998, the company announced its plan to publish the source code of its flagship Navigator web browser in the hopes of enlisting hacker support in future development.When Netscape CEO Jim Barksdale cited Raymond's "Cathedral and the Bazaar" essay as a major influence upon the company's decision, the company instantly elevated Raymond to the level of hacker celebrity. Determined not to squander the opportunity, Raymond traveled west to deliver interviews, advise Netscape executives, and take part in the eventual party celebrating the publication of Netscape Navigator's source code. The code name for Navigator's source code was "Mozilla": a reference both to the program's gargantuan size-30 million lines of code-and to its heritage. Developed as a proprietary offshoot of Mosaic, the web browser created by Marc Andreessen at the University of Illinois, Mozilla was proof, yet again, that when it came to building new programs, most programmers preferred to borrow on older, modifiable programs.While in California, Raymond also managed to squeeze in a visit to VA Research, a Santa Clara-based company selling workstations with the GNU/Linux operating system preinstalled. Convened by Raymond, the meeting was small. The invite list included VA founder Larry Augustin, a few VA employees, and Christine Peterson, president of the Foresight Institute, a Silicon Valley think tank specializing in nanotechnology."The meeting's agenda boiled down to one item: how to take advantage of Netscape's decision so that other companies might follow suit?" Raymond doesn't recall the conversation that took place, but he does remember the first complaint addressed. Despite the best efforts of Stallman and other hackers to remind people that the word "free" in free software stood for freedom and not price, the message still wasn't getting through. Most business executives, upon hearing the term for the first time, interpreted the word as synonymous with "zero cost," tuning out any follow up messages in short order. Until hackers found a way to get past this cognitive dissonance, the free software movement faced an uphill climb, even after Netscape.Peterson, whose organization had taken an active interest in advancing the free software cause, offered an alternative: open source.Looking back, Peterson says she came up with the open source term while discussing Netscape's decision with a friend in the public relations industry. She doesn't remember where she came upon the term or if she borrowed it from another field, but she does remember her friend disliking the term.5At the meeting, Peterson says, the response was dramatically different. "I was hesitant about suggesting it," Peterson recalls. "I had no standing with the group, so started using it casually, not highlighting it as a new term." To Peterson's surprise, the term caught on. By the end of the meeting, most of the attendees, including Raymond, seemed pleased by it.Raymond says he didn't publicly use the term "open source" as a substitute for free software until a day or two after the Mozilla launch party, when O'Reilly had scheduled a meeting to talk about free software. Calling his meeting "the Freeware Summit," O'Reilly says he wanted to direct media and community attention to the other deserving projects that had also encouraged Netscape to release Mozilla. "All these guys had so much in common, and I was surprised they didn't all know each other," says O'Reilly. 
"I also wanted to let the world know just how great an impact the free software culture had already made. People were missing out on a large part of the free software tradition."In putting together the invite list, however, O'Reilly made a decision that would have long-term political consequences. He decided to limit the list to west-coast developers such as Wall, Eric Allman, creator of sendmail, and Paul Vixie, creator of BIND. There were exceptions, of course: Pennsylvania-resident Raymond, who was already in town thanks to the Mozilla launch, earned a quick invite. So did Virginia-resident Guido van Rossum, creator of Python. "Frank Willison, my editor in chief and champion of Python within the company, invited him without first checking in with me," O'Reilly recalls. "I was happy to have him there, but when I started, it really was just a local gathering."For some observers, the unwillingness to include Stallman's name on the list qualified as a snub. "I decided not to go to the event because of it," says Perens, remembering the summit. Raymond, who did go, says he argued for Stallman's inclusion to no avail. The snub rumor gained additional strength from the fact that O'Reilly, the event's host, had feuded publicly with Stallman over the issue of software-manual copyrights. Prior to the meeting, Stallman had argued that free software manuals should be as freely copyable and modifiable as free software programs. O'Reilly, meanwhile, argued that a value-added market for nonfree books increased the utility of free software by making it more accessible to a wider community. The two had also disputed the title of the event, with Stallman insisting on "Free Software" over the less politically laden "Freeware."Looking back, O'Reilly doesn't see the decision to leave Stallman's name off the invite list as a snub. "At that time, I had never met Richard in person, but in our email interactions, he'd been inflexible and unwilling to engage in dialogue. I wanted to make sure the GNU tradition was represented at the meeting, so I invited John Gilmore and Michael Tiemann, whom I knew personally, and whom I knew were passionate about the value of the GPL but seemed more willing to engage in a frank back-and-forth about the strengths and weaknesses of the various free software projects and traditions. Given all the later brouhaha, I do wish I'd invited Richard as well, but I certainly don't think that my failure to do so should be interpreted as a lack of respect for the GNU Project or for Richard personally."Snub or no snub, both O'Reilly and Raymond say the term "open source" won over just enough summit-goers to qualify as a success. The attendees shared ideas and experiences and brainstormed on how to improve free software's image. Of key concern was how to point out the successes of free software, particularly in the realm of Internet infrastructure, as opposed to playing up the GNU/Linux challenge to Microsoft Windows. But like the earlier meeting at VA, the discussion soon turned to the problems associated with the term "free software." O'Reilly, the summit host, remembers a particularly insightful comment from Torvalds, a summit attendee."Linus had just moved to Silicon Valley at that point, and he explained how only recently that he had learned that the word `free' had two meanings-free as in `libre' and free as in `gratis'-in English."Michael Tiemann, founder of Cygnus, proposed an alternative to the troublesome "free software" term: sourceware. 
"Nobody got too excited about it," O'Reilly recalls. "That's when Eric threw out the term `open source.'"Although the term appealed to some, support for a change in official terminology was far from unanimous. At the end of the one-day conference, attendees put the three terms-free software, open source, or sourceware-to a vote. According to O'Reilly, 9 out of the 15 attendees voted for "open source." Although some still quibbled with the term, all attendees agreed to use it in future discussions with the press. "We wanted to go out with a solidarity message," O'Reilly says.The term didn't take long to enter the national lexicon. Shortly after the summit, O'Reilly shepherded summit attendees to a press conference attended by reporters from the New York Times, the Wall Street Journal, and other prominent publications. Within a few months, Torvalds' face was appearing on the cover of Forbes magazine, with the faces of Stallman, Perl creator Larry Wall, and Apache team leader Brian Behlendorf featured in the interior spread. Open source was open for business.For summit attendees such as Tiemann, the solidarity message was the most important thing. Although his company had achieved a fair amount of success selling free software tools and services, he sensed the difficulty other programmers and entrepreneurs faced."There's no question that the use of the word free was confusing in a lot of situations," Tiemann says. "Open source positioned itself as being business friendly and business sensible. Free software positioned itself as morally righteous. For better or worse we figured it was more advantageous to align with the open source crowd.For Stallman, the response to the new "open source" term was slow in coming. Raymond says Stallman briefly considered adopting the term, only to discard it. "I know because I had direct personal conversations about it," Raymond says.By the end of 1998, Stallman had formulated a position: open source, while helpful in communicating the technical advantages of free software, also encouraged speakers to soft-pedal the issue of software freedom. Given this drawback, Stallman would stick with the term free software.Summing up his position at the 1999 LinuxWorld Convention and Expo, an event billed by Torvalds himself as a "coming out party" for the Linux community, Stallman implored his fellow hackers to resist the lure of easy compromise."Because we've shown how much we can do, we don't have to be desperate to work with companies or compromise our goals," Stallman said during a panel discussion. "Let them offer and we'll accept. We don't have to change what we're doing to get them to help us. You can take a single step towards a goal, then another and then more and more and you'll actually reach your goal. Or, you can take a half measure that means you don't ever take another step and you'll never get there."Even before the LinuxWorld show, however, Stallman was showing an increased willingness to alienate his more conciliatory peers. A few months after the Freeware Summit, O'Reilly hosted its second annual Perl Conference. This time around, Stallman was in attendance. During a panel discussion lauding IBM's decision to employ the free software Apache web server in its commercial offerings, Stallman, taking advantage of an audience microphone, disrupted the proceedings with a tirade against panelist John Ousterhout, creator of the Tcl scripting language. 
Stallman branded Ousterhout a "parasite" on the free software community for marketing a proprietary version of Tcl via Ousterhout's startup company, Scriptics. "I don't think Scriptics is necessary for the continued existence of Tcl," Stallman said to hisses from the fellow audience members.See Malcolm Maclachlan, "Profit Motive Splits Open Source Movement," TechWeb News (August 26, 1998). http://content.techweb.com/wire/story/TWB19980824S0012"It was a pretty ugly scene," recalls Prime Time Freeware's Rich Morin. "John's done some pretty respectable things: Tcl, Tk, Sprite. He's a real contributor."Despite his sympathies for Stallman and Stallman's position, Morin felt empathy for those troubled by Stallman's discordant behavior.Stallman's Perl Conference outburst would momentarily chase off another potential sympathizer, Bruce Perens. In 1998, Eric Raymond proposed launching the Open Source Initiative, or OSI, an organization that would police the use of the term "open source" and provide a definition for companies interested in making their own programs. Raymond recruited Perens to draft the definition.See Bruce Perens et al., "The Open Source Definition," The Open Source Initiative (1998). http://www.opensource.org/docs/definition.htmlPerens would later resign from the OSI, expressing regret that the organization had set itself up in opposition to Stallman and the FSF. Still, looking back on the need for a free software definition outside the Free Software Foundation's auspices, Perens understands why other hackers might still feel the need for distance. "I really like and admire Richard," says Perens. "I do think Richard would do his job better if Richard had more balance. That includes going away from free software for a couple of months."Stallman's monomaniacal energies would do little to counteract the public-relations momentum of open source proponents. In August of 1998, when chip-maker Intel purchased a stake in GNU/Linux vendor Red Hat, an accompanying New York Times article described the company as the product of a movement "known alternatively as free software and open source."See Amy Harmon, "For Sale: Free Operating System," New York Times (September 28, 1998).http://www.nytimes.com/library/tech/98/09/biztech/articles/28linux.html Six months later, a John Markoff article on AppleComputer was proclaiming the company's adoption of the"open source" Apache server in the article headline.See John Markoff, "AppleAdopts `Open Source' for itsServer Computers," New York Times (March 17, 1999).http://www.nytimes.com/library/tech/99/03/biztech/articles/17apple.htmlSuch momentum would coincide with the growing momentum of companies that actively embraced the "open source" term. By August of 1999, Red Hat, a company that now eagerly billed itself as "open source," was selling shares on Nasdaq. In December, VA Linux-formerly VA Research-was floating its own IPO to historical effect. Opening at $30 per share, the company's stock price exploded past the $300 mark in initial trading only to settle back down to the $239 level. Shareholders lucky enough to get in at the bottom and stay until the end experienced a 698% increase in paper wealth, a Nasdaq record.Among those lucky shareholders was Eric Raymond, who, as a company board member since the Mozilla launch, had received 150,000 shares of VA Linux stock. Stunned by the realization that his essay contrasting the Stallman-Torvalds managerial styles had netted him $36 million in potential wealth, Raymond penned a follow-up essay. 
In it, Raymond mused on the relationship between the hacker ethic and monetary wealth: Reporters often ask me these days if I think the open-source community will be corrupted by the influx of big money. I tell them what I believe, which is this: commercial demand for programmers has been so intense for so long that anyone who can be seriously distracted by money is already gone. Our community has been self-selected for caring about other things-accomplishment, pride, artistic passion, and each other. See Eric Raymond, "Surprised by Wealth," Linux Today (December 10, 1999). http://linuxtoday.com/news_story.php3?ltsn=1999-12-10-001-05-NW-LF Whether or not such comments allayed suspicions that Raymond and other open source proponents had simply been in it for the money, they drove home the open source community's ultimate message: all you needed to sell the free software concept is a friendly face and a sensible message. Instead of fighting the marketplace head-on as Stallman had done, Raymond, Torvalds, and other new leaders of the hacker community had adopted a more relaxed approach-ignoring the marketplace in some areas, leveraging it in others. Instead of playing the role of high-school outcasts, they had played the game of celebrity, magnifying their power in the process. "On his worst days Richard believes that Linus Torvalds and I conspired to hijack his revolution," Raymond says. "Richard's rejection of the term open source and his deliberate creation of an ideological fissure in my view comes from an odd mix of idealism and territoriality. There are people out there who think it's all Richard's personal ego. I don't believe that. It's more that he so personally associates himself with the free software idea that he sees any threat to that as a threat to himself." Ironically, the success of open source and open source advocates such as Raymond would not diminish Stallman's role as a leader. If anything, it gave Stallman new followers to convert. Still, the Raymond territoriality charge is a damning one. There are numerous instances of Stallman sticking to his guns more out of habit than out of principle: his initial dismissal of the Linux kernel, for example, and his current unwillingness as a political figure to venture outside the realm of software issues. Then again, as the recent debate over open source also shows, in instances when Stallman has stuck to his guns, he's usually found a way to gain ground because of it. "One of Stallman's primary character traits is the fact he doesn't budge," says Ian Murdock. "He'll wait up to a decade for people to come around to his point of view if that's what it takes." Murdock, for one, finds that unbudgeable nature both refreshing and valuable. Stallman may no longer be the solitary leader of the free software movement, but he is still the polestar of the free software community. "You always know that he's going to be consistent in his views," Murdock says. "Most people aren't like that. Whether you agree with him or not, you really have to respect that." About HackerNoon Book Series: We bring you the most important technical, scientific, and insightful public domain books. This book is part of the public domain. Sam Williams (2004). Free as in Freedom: Richard Stallman's Crusade for Free Software. Urbana, Illinois: Project Gutenberg. Retrieved October 2022, from https://www.gutenberg.org/cache/epub/5768/pg5768.html This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. 
You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org, located at https://www.gutenberg.org/policy/license.html.

Read More...
posted about 16 hours ago on hacker noon
\ In this post, we will learn to scrape Google Shopping Results using Node JS. \ Requirements: Web Parsing with CSS selectors Searching for the tags in HTML files is not only a difficult thing to do but also a time-consuming process. It is better to use the CSS Selectors Gadget for selecting the perfect tags to make your web scraping journey easier. \ This gadget can help you come up with the perfect CSS selector for your needs. Here is the link to the tutorial, which will teach you to use this gadget to select the best CSS selectors according to your needs. User Agents The User-Agent header is used to identify the application, operating system, vendor, and version of the requesting user agent, which can help make requests to Google look like they come from a real user. \ You can also rotate User Agents; read more about this in this article: How to fake and rotate User Agents using Python 3. A minimal Node JS rotation sketch is included at the end of this article. \ Install Libraries To scrape Google Shopping results, we need to install some NPM libraries so we can move forward. \ Unirest JS Cheerio JS \ So before starting, we have to ensure that we have set up our Node JS project and installed both packages - Unirest JS and Cheerio JS. You can install both packages from the links above. Target: We will scrape the shopping results for Nike shoes. Process: \ We have installed everything we will need for our scraper. Now we will hit our target URL using Unirest JS to get our HTML data, and then we will parse the extracted HTML with the help of Cheerio JS. \ We will target this URL: \ https://www.google.com/search?q=nike shoes&tbm=shop&gl=us \ Look at the tbm parameter and its value (shop, here). This value shop tells Google that we are looking for shopping results. \ Open this URL in your browser and inspect the code. You will see that every organic shopping result is inside the tag .sh-dgr__gr-auto. \ Now, we will search the tags for the title, product link, price, rating, reviews, delivery, and source. We have completed our search for the tags of the organic shopping results. Next, we will search for the tags of the ad results. If you inspect the ad results, you will see that all of them are inside the tag .sh-np__click-target. This tag contains all the information about the title, link, price, and source. All of the above makes our code look like this: \
const unirest = require("unirest");
const cheerio = require("cheerio");

const getShoppingData = () => {
  try {
    return unirest
      .get("https://www.google.com/search?q=nike shoes&tbm=shop&gl=us")
      .headers({
        "User-Agent":
          "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36",
      })
      .then((response) => {
        let $ = cheerio.load(response.body);

        // Ad results
        let ads = [];
        $(".sh-np__click-target").each((i, el) => {
          ads.push({
            title: $(el).find(".sh-np__product-title").text(),
            link: "https://google.com" + $(el).attr("href"),
            source: $(el).find(".sh-np__seller-container").text(),
            price: $(el).find(".hn9kf").text(),
            delivery: $(el).find(".U6puSd").text(),
          });
          if ($(el).find(".rz2LD").length) {
            let extensions = [];
            extensions = $(el).find(".rz2LD").text();
            ads[i].extensions = extensions;
          }
        });

        // Drop empty fields from the ad results
        for (let i = 0; i < ads.length; i++) {
          Object.keys(ads[i]).forEach((key) =>
            ads[i][key] === "" ? delete ads[i][key] : {}
          );
        }

        // Organic shopping results
        let shopping_results = [];
        $(".sh-dgr__gr-auto").each((i, el) => {
          shopping_results.push({
            title: $(el).find(".Xjkr3b").text(),
            link: $(el)
              .find(".zLPF4b .eaGTj a.shntl")
              .attr("href")
              .substring($(el).find("a.shntl").attr("href").indexOf("=") + 1),
            source: $(el).find(".IuHnof").text(),
            price: $(el).find(".XrAfOe .a8Pemb").text(),
            rating: $(el).find(".Rsc7Yb").text(),
            reviews: $(el).find(".NzUzee div").attr("aria-label")
              ? $(el)
                  .find(".NzUzee div")
                  .attr("aria-label")
                  .substring(0, $(el).find(".NzUzee div").attr("aria-label").indexOf(" "))
              : "",
            delivery: $(el).find(".vEjMR").text(),
          });
          if ($(el).find(".Ib8pOd").length) {
            let extensions = [];
            extensions = $(el).find(".Ib8pOd").text();
            shopping_results[i].extensions = extensions;
          }
        });

        // Drop empty fields from the organic results
        for (let i = 0; i < shopping_results.length; i++) {
          Object.keys(shopping_results[i]).forEach((key) =>
            shopping_results[i][key] === "" ? delete shopping_results[i][key] : {}
          );
        }

        console.log(ads);
        console.log(shopping_results);
      });
  } catch (e) {
    console.log(e);
  }
};

getShoppingData();
\ You can also check some of my other Google scrapers in my Git repository: https://github.com/Darshan972/GoogleScrapingBlogs \ Result: Our result should look like this 👆🏻. With Google Shopping API \ If you don't want to code and maintain the scraper in the long run and don't want to work with complex URLs and HTML, then you can try this Google Search API. \ Serpdog | Google Search API solves all the problems of CAPTCHAs and proxies and allows developers to scrape Google Search Results smoothly. Also, the pre-cooked structured JSON data can save you a lot of time. \
const axios = require('axios');

axios
  .get('https://api.serpdog.io/shopping?api_key=APIKEY&q=shoes&gl=us')
  .then((response) => {
    console.log(response.data);
  })
  .catch((error) => {
    console.log(error);
  });
Result: Conclusion: In this tutorial, we learned to scrape Google Shopping Results using Node JS. Feel free to message me about anything you need clarification on. Follow me on Twitter. Thanks for reading! \ Also published here. \ Additional Resources Scrape Google Images Results Scrape Google Maps Reviews Frequently Asked Questions Q. How do I get Google Shopping results? You can get Google Shopping Results by using the Serpdog Google Shopping API without any problems with proxies and CAPTCHAs. This data is a great source for data miners doing competitor price tracking, sentiment analysis, etc. \ Author: My name is Darshan, and I am the founder of Serpdog. I love to create scrapers. I am currently working with several MNCs to provide them with Google Search data through a seamless data pipeline.
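\ As for the User-Agent rotation mentioned in the requirements, here is a small Node JS sketch of how you might rotate headers between requests with Unirest. The list of User-Agent strings is just an illustrative sample; in practice you would maintain a larger, regularly updated pool.

const unirest = require("unirest");

// A tiny illustrative pool; real scrapers rotate through many more strings.
const userAgents = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Safari/605.1.15",
  "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0",
];

const randomUserAgent = () =>
  userAgents[Math.floor(Math.random() * userAgents.length)];

const fetchPage = (url) =>
  unirest
    .get(url)
    .headers({ "User-Agent": randomUserAgent() })
    .then((response) => response.body);

// Each call now goes out with a different browser identity.
fetchPage("https://www.google.com/search?q=nike shoes&tbm=shop&gl=us")
  .then((html) => console.log(html.length));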

Read More...
posted about 17 hours ago on hacker noon
Hello Beautiful Humans!

It’s been quite the year so far. Feel free to pause and take a deep breath. If you’re a crypto faithful, you might need to take two, as yet another crypto villain has come to the fore in the last few weeks.

While the destructive domino set off by the FTX saga continues to topple over, thousands of Twitter employees are dusting off their resumes as Elon settles into his not-so-new role.

A lot is happening (as always), and we need YOU to help the HackerNoon community keep up. So tell us how you feel about tech companies (and the people who run and work in them) here, or join our community to vote in the weekly HackerNoon Technology polls.

Additionally, you can keep up with Trending Tech Companies on our Homepage, or if you want to know how different cryptocurrencies are faring, visit our coin pages.

Tell Your Story to a Global Audience

At HackerNoon, we want everyone everywhere to have access to quality stories that are changing the world (for good or bad). This is why we have used machine learning to translate all HackerNoon top stories from English to Spanish, Hindi, Mandarin, Vietnamese, French, Portuguese, and Japanese.

:::tip
If you want a chance to reach HackerNoon’s growing global audience but are unsure how to start, you can draw inspiration from our intuitive story templates.
:::

Writing a Top Story

To get your story featured at the top of HackerNoon’s homepage and translated into 7 languages, a good first step is to take a look at previously published top stories.

You should also take a quick look at pocket SEO tips from our VP of Editorial, our writing guidelines, and HackerNoon CEO David Smooke’s 3 headline tips. You can then write your well-researched views on current events (White boy); your technical expertise (A guide on how to build a blockchain with Javascript); or even a story based on the results of a HackerNoon poll (33% of technologists think Section 230 reform will stifle free speech).

Finally, be creative and authentic.

Start writing your HackerNoon story now. We can’t wait to see what you come up with!

Read More...
posted about 19 hours ago on hacker noon
The Palmer Method of Business Writing, by A. N. Palmer, is part of the HackerNoon Books series. You can jump to any chapter in this book here.

Lesson 99

Drill 121. Practice capital P at the rate of fifty to sixty letters a minute; the word “Pulling”, twelve words a minute. Some pupils will be able to write the word at higher speed and still do good work.

About HackerNoon Book Series: We bring you the most important technical, scientific, and insightful public domain books. This book is part of the public domain.

Palmer, A. N. 2021. The Palmer Method of Business Writing. Urbana, Illinois: Project Gutenberg. Retrieved December 2022 from https://www.gutenberg.org/files/66476/66476-h/66476-h.htm

This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away, or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org, located at https://www.gutenberg.org/policy/license.html.

Read More...
posted about 21 hours ago on hacker noon
Earlier this year, I created a really fun project that I called "Brain to the Cloud" where I saved my brain data to the cloud while playing Call of Duty so that I could analyze the relationship between my cognitive function and video game performance. I wrote up a three-part blog post series and created some fun videos to summarize my findings on that project. If you'd like to check those out, you can refer to the links at the bottom of this post. A few months after I published that project, I started working at Twitch as the lead Developer Advocate for Amazon Interactive Video Service (Amazon IVS) - a fully managed solution for creating live, interactive video streaming solutions (check out this series to learn more). The next step of my "Brain to the Cloud" project was obvious - I needed to live stream my brain.

Broadcasting My Brain

Before we look at the code, let's see the final product. There are two views for the application: a broadcasting view and a playback view. In the broadcasting view, we can preview the live video, start the broadcast, and connect the Muse headband to stream the brain data obtained from the headband. In the playback view, we display the live stream with a <video> element and chart the brain data in real time.

Project Overview

There are 5 steps to this project:

Broadcast the live stream
Capture brain data
Publish the brain data as timed metadata within the live stream
Play back the live stream
Listen for timed metadata and render the brain data in a chart in real time

If you prefer graphical depictions of such things, here's how this looks:

Building the Project

I used React for this project. Why? Well, I've got plenty of experience with Vue and Angular, but I'm probably one of the last developers on earth to try React. I figured it was about time to figure out what all the hype was about, and I knew that this would not be a difficult project to build with it. Due to my lack of prior experience, I'm not what you'd call an "advanced" user of the framework, but I have to say that I'm pretty happy with what I see so far. I found the process enjoyable and did not find myself "fighting" with the framework. But this blog post isn't about my opinion on JavaScript frameworks, so I'll save that for a future post. Instead, let's talk about how I broadcast my brain!

The Hardware

In my original "Brain to the Cloud" project, I used a "vintage" EEG headset called a MindFlex to capture my brain readings. It worked fairly well but required me to "hack" the device by adding an ESP-12 microcontroller in order to pull the readings off of the device and send them to the cloud. This time I reached for something slightly newer - and something that I could use with no modifications. After a bit of research, I settled on the Muse S Headband. Thankfully, there is a really awesome open-source library called muse-js which lets me access the brain readings directly in a web browser with Web Bluetooth (in supported browsers, of course).

The Live Stream Broadcast

Until recently, live streaming with Amazon IVS required us to use a third-party client to broadcast our streams as RTMPS. But we recently launched a game-changer: the Amazon IVS Web Broadcast SDK. As the name implies, this SDK gives us the ability to broadcast our live stream via WebRTC directly from a web browser.
Clearly, this was a perfect fit for live streaming my brain since it means that I can create an "all-in-one" solution for broadcasting my brain data along with my live stream without relying on third-party software or external scripts.

Adding Web Broadcast to the React App

We're not going to look at every single step required to utilize the Web Broadcast SDK in this post. Instead, we'll look at the highlights to get a general idea of how it works. Don't worry - I've got another post coming soon where we'll dig into the "step-by-step" process for using the Web Broadcast SDK, so stay tuned for that. That said, let's take a quick journey to see how I used the SDK in this project. My first step was to install the amazon-ivs-web-broadcast module. Using your favorite package management tool, run:

```bash
$ npm install amazon-ivs-web-broadcast
```

Next, we need to import it into our component. In my Broadcast.jsx component, I added:

```javascript
import IVSBroadcastClient, { STANDARD_LANDSCAPE } from 'amazon-ivs-web-broadcast';
```

We can create an instance of the IVSBroadcastClient with the desired stream configuration and the ingest endpoint from our Amazon IVS channel, and set it into our component's state.

```javascript
this.setState({
  broadcastClient: IVSBroadcastClient.create({
    streamConfig: STANDARD_LANDSCAPE,
    ingestEndpoint: this.state.ingestEndpoint,
  })
});
```

Now that we've got an instance of the client, we can add our camera to the client. For this we use navigator.mediaDevices.getUserMedia().

```javascript
const streamConfig = STANDARD_LANDSCAPE;
const videoStream = await navigator.mediaDevices.getUserMedia({
  video: {
    deviceId: { exact: this.state.selectedVideoDeviceId },
    width: {
      ideal: streamConfig.maxResolution.width,
      max: streamConfig.maxResolution.width,
    },
    height: {
      ideal: streamConfig.maxResolution.height,
      max: streamConfig.maxResolution.height,
    },
  },
});
this.state.broadcastClient.addVideoInputDevice(videoStream, 'camera1', { index: 0 });
```

Adding the user's microphone to the client follows a similar pattern.

```javascript
const audioStream = await navigator.mediaDevices.getUserMedia({
  audio: { deviceId: this.state.selectedAudioDeviceId },
});
this.state.broadcastClient.addAudioInputDevice(audioStream, 'mic1');
```

Note: Because of the browser security model, we need to get permissions to access the user's camera and microphone. Refer to the project source on GitHub for more information on this, and to see how I captured a list of devices and presented them in a dialog to allow the user to choose the broadcast device if multiple options are available.

Now we can add a live preview to the page so that we can see what our viewers will ultimately see on the player side of things.

```jsx
<canvas ref={this.previewRef} id='broadcast-preview'></canvas>
```

And attach the preview to the broadcastClient:

```javascript
this.state.broadcastClient.attachPreview(this.previewRef.current);
```

To start the broadcast, add a button to the page, and in the onClick handler for the button call startBroadcast() on the broadcastClient (passing the necessary streamKey).

```javascript
this.state.broadcastClient.startBroadcast(this.state.streamKey);
```

Obtaining My Brain Data

As I mentioned above, I used the muse-js library, which provides the ability to connect to the headband and pull the raw data. However, muse-js does not calculate the absolute band powers for the EEG data. For this, I needed to reach for another library: eeg-pipes.
The first step is to add and import the libraries.

```bash
$ npm install muse-js
$ npm install @neurosity/pipes
```

```javascript
import { zipSamples, MuseClient } from 'muse-js';
import { powerByBand, epoch, fft } from '@neurosity/pipes';
```

Next, I added a button with a click handler. In the handler, I connect to the headset, start listening for data, and subscribe to the stream.

```javascript
const client = new MuseClient();
await client.connect();
await client.start();

zipSamples(client.eegReadings)
  .pipe(
    epoch({ duration: 1024, interval: 250, samplingRate: 256 }),
    fft({ bins: 256 }),
    powerByBand(),
  )
  .subscribe((data) => {
    const ch0 = [data.delta[0], data.theta[0], data.alpha[0], data.beta[0], data.gamma[0]];
    const ch1 = [data.delta[1], data.theta[1], data.alpha[1], data.beta[1], data.gamma[1]];
    const ch2 = [data.delta[2], data.theta[2], data.alpha[2], data.beta[2], data.gamma[2]];
    const ch3 = [data.delta[3], data.theta[3], data.alpha[3], data.beta[3], data.gamma[3]];
    const meta = [ch0, ch1, ch2, ch3];
    // publish metadata
  });
```

Publishing My Brain Data as Timed Metadata

Now that I've got a handler that collects my brain data from the Muse headband, it's time to publish that data as timed metadata in the live stream. The awesome thing about timed metadata is that it is directly embedded in the video stream and remains a permanent part of that stream. That means that it exists even in recorded versions, so even in on-demand playback we can listen for and respond to the events.

The Web Broadcast SDK does not support publishing timed metadata from the client side, so we'll have to use putMetadata (docs) via the AWS SDK for JavaScript. For this, I created an AWS Lambda function.

```javascript
const AWS = require('aws-sdk');
const ivs = new AWS.IVS({ apiVersion: '2020-07-14', region: 'us-east-1' });

exports.send = async (event, context, callback) => {
  // response object
  const response = {
    'statusCode': 200,
    'headers': {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'OPTIONS,GET,PUT,POST,DELETE',
      'Content-Type': 'application/json'
    },
    'body': '',
    'isBase64Encoded': false
  };

  // parse payload
  let payload;
  try {
    payload = JSON.parse(event.body);
  } catch (err) {
    response.statusCode = 500;
    response.body = JSON.stringify(err);
    callback(null, response);
    return;
  }

  // validate payload
  if (!payload || !payload.channelArn || !payload.metadata) {
    response.statusCode = 400;
    response.body = 'Must provide channelArn and metadata';
    callback(null, response);
    return;
  }

  // check payload size
  let byteLength = Buffer.byteLength(payload.metadata, 'utf8');
  if (byteLength > 1024) {
    response.statusCode = 400;
    response.body = 'Too big. Must be less than or equal to 1K';
    callback(null, response);
    return;
  }

  // putMetadata input
  let params = {
    channelArn: payload.channelArn,
    metadata: payload.metadata
  };
  try {
    await ivs.putMetadata(params).promise();
    response.statusCode = 200;
    response.body = JSON.stringify({ 'published': true }, '', 2);
    callback(null, response);
  } catch (err) {
    response.statusCode = 500;
    response.body = err.stack;
    callback(null, response);
    return;
  }
};
```

To publish my brain data as timed metadata, I created an Amazon API Gateway to invoke the function and modified the subscribe() handler above to call the AWS Lambda function.

```javascript
zipSamples(client.eegReadings)
  .pipe(
    epoch({ duration: 1024, interval: 250, samplingRate: 256 }),
    fft({ bins: 256 }),
    powerByBand(),
  )
  .subscribe((data) => {
    const ch0 = [data.delta[0], data.theta[0], data.alpha[0], data.beta[0], data.gamma[0]];
    const ch1 = [data.delta[1], data.theta[1], data.alpha[1], data.beta[1], data.gamma[1]];
    const ch2 = [data.delta[2], data.theta[2], data.alpha[2], data.beta[2], data.gamma[2]];
    const ch3 = [data.delta[3], data.theta[3], data.alpha[3], data.beta[3], data.gamma[3]];
    const meta = [ch0, ch1, ch2, ch3];
    // publish metadata while broadcasting
    if (this.state.isBroadcasting) {
      fetch(LAMBDA_URL, {
        'method': 'POST',
        'mode': 'no-cors',
        'headers': {
          'Content-Type': 'application/json',
        },
        'body': JSON.stringify({
          channelArn: this.state.channelArn,
          metadata: JSON.stringify(meta)
        })
      });
    }
  });
```

Building the Live Stream Playback and Charting My Brain Data

Once the live stream with the brain data broadcast view was complete, it was time to create a playback experience that would display the live stream and chart the brain data in real time as it came in via timed metadata.

Creating the Live Stream Player

We can use the IVS Web Player SDK via NPM, but since it uses WebAssembly things can get tricky.
To avoid that trickiness, I find it easier to use the web player via a <script> tag, which I added to the index.html in my React app.

```html
<script src="https://player.live-video.net/1.12.0/amazon-ivs-player.min.js"></script>
```

In my Playback.jsx component, I grab a reference to the player and some necessary elements.

```javascript
const { IVSPlayer } = window;
const { create: createMediaPlayer, isPlayerSupported, PlayerEventType, PlayerState } = IVSPlayer;
const { ENDED, PLAYING, READY, BUFFERING } = PlayerState;
const { TEXT_METADATA_CUE, ERROR } = PlayerEventType;
```

For playback, we use the native <video> tag.

```jsx
<video ref={this.videoRef} controls playsInline></video>
```

And to initialize the player and start playback:

```javascript
this.playerRef.current = createMediaPlayer();
this.playerRef.current.attachHTMLVideoElement(this.videoRef.current);
this.playerRef.current.load(STREAM_URL);
this.playerRef.current.play();
```

Listening and Responding to Timed Metadata

Now that we're playing the live stream, we can listen for and respond to the incoming brain data.

```javascript
this.playerRef.current.addEventListener(TEXT_METADATA_CUE, this.onPlayerMetadata);
```

Set the brain data into our component state:

```javascript
onPlayerMetadata = (e) => {
  //console.log(e);
  const data = JSON.parse(e.text);
  this.setState(state => {
    state.ch0.datasets[0].data = data[0];
    state.ch1.datasets[0].data = data[1];
    state.ch2.datasets[0].data = data[2];
    state.ch3.datasets[0].data = data[3];
    this.chartReferenceCh0.current.data.datasets[0].data = state.ch0.datasets[0].data;
    this.chartReferenceCh1.current.data.datasets[0].data = state.ch1.datasets[0].data;
    this.chartReferenceCh2.current.data.datasets[0].data = state.ch2.datasets[0].data;
    this.chartReferenceCh3.current.data.datasets[0].data = state.ch3.datasets[0].data;
    return ({
      ch0: state.ch0,
      ch1: state.ch1,
      ch2: state.ch2,
      ch3: state.ch3
    });
  });
};
```

And render it with a bar chart (with Chart.js):

```jsx
<Bar
  data={this.state.ch0}
  ref={this.chartReferenceCh0}
  options={{
    aspectRatio: 1,
    title: { display: true, text: 'Channel: ' + channelNames[0] },
    responsive: true,
    tooltips: { enabled: false },
    legend: { display: false }
  }}
/>
```

The visualization is cool, and it certainly provides a fun way to see my brain data while I'm live streaming a game, but it doesn't provide a ton of context. So I figured it would make sense to include some calculations to give insight into what the data actually means. For that, I found some calculations in the muse-lsl project on GitHub, which included formulas that can be used to calculate factors like relaxation (alpha divided by delta) and concentration (beta divided by theta). Another great blog post I found highlighted a way to derive fatigue ((theta + alpha) / beta). I wrapped these calculations up in a handy, reusable component that displays the Relaxation, Fatigue, and Focus values alongside the charts (in each channel, index 0 is delta, 1 is theta, 2 is alpha, 3 is beta, and 4 is gamma); the full markup for that component is in the project source on GitHub.

Summary

In this post, we looked at how I created a React application to live stream my brain data with Amazon IVS. If you'd like to learn more about Amazon IVS, please check out the series Getting Started with Amazon Interactive Video Service here on dev.to. If you're interested in trying out the application or just checking out the full source for the application, check it out on GitHub.
Your comments, questions, and feedback are always welcome, so leave a comment here or connect with me on Twitter.

Links

Brain to the Cloud - Part I - Project Intro and Architectural Overview
Brain to the Cloud - Part II - How I Uploaded My Brain to the Cloud
Brain to the Cloud - Part III - Examining the Relationship Between Brain Activity and Video Game Performance
Project Source Code

First published here.

Read More...
posted about 23 hours ago on hacker noon
The attacker's perspective on K8S cluster security (Part 1) summarized the attack methods against K8S components, node external services, business pods, and container escape in a K8S cluster, corresponding to attack points 1-7. This article continues with the remaining attack points: lateral attacks, attacks on the K8S management platform, attacks on image libraries, and attacks on third-party components.

K8S Cluster Attack Points

Attack point: Lateral attack

Attack other services

There are often internal services exposed through ClusterIP in the cluster. These services cannot be scanned from outside the cluster, but sensitive services may be discovered from inside a pod using the information-collection methods mentioned above, such as scanning ports or looking at environment variables.

In an earlier internal penetration test, we found the address of the mysql service in the environment variables of the target pod, as shown in the figure, and successfully logged in to the mysql database by trying a weak password.

Attack the API Server

Communication between a pod in the K8S cluster and the API Server is authenticated with the ServiceAccount token. There is an attack point here: if the pod's ServiceAccount has overly broad permissions, the pod can talk to the API Server with high privileges, and it may be possible to view sensitive cluster information, perform privileged operations, or even take over the cluster. A minimal sketch of this check is included further below, just before the next attack point.

The token is saved in the /run/secrets/kubernetes.io/serviceaccount/token file of the pod by default. In an actual attack, the address of the API Server can generally be read directly from the pod's environment variables, as shown in the figure.

On the subject of intranet IP addresses, note that if you want to attack the kubelet of the current node from inside a pod, the IP address of the docker0 bridge can generally be used directly: 172.17.0.1. The figure demonstrates accessing port 10250 of the current node's kubelet from within a pod.

Man-in-the-middle attack

A man-in-the-middle attack is a classic attack method. We know that in a K8S cluster, network plug-ins such as Flannel, Calico, and Cilium are needed to provide network communication between pods. Is a man-in-the-middle attack possible within the cluster's internal network? The answer is yes.

In a K8S cluster with the default configuration, if an attacker obtains control of a pod, it is possible to hijack the DNS of other pods through a man-in-the-middle attack.

In addition, K8S itself has had man-in-the-middle vulnerabilities in past years, such as CVE-2020-8554 and CVE-2020-10749.

Attack point: Attack the K8S management platform

In addition to the officially provided Dashboard, there are many other K8S management platforms, such as Rancher, KubeSphere, and KubeOperator. Like the Dashboard, these management platforms are vulnerable to unauthorized access and weak-password logins.

The figure shows the management interface of Rancher. If you successfully log in to the management console through a weak password, you can create a privileged container and escape from it afterward, just as with an unauthorized Dashboard.

Management platforms such as Rancher directly control the entire cluster. Once a security problem occurs, the harm is very serious. From a security point of view, the management platform should not be exposed to the external network if at all possible.
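To make the ServiceAccount and kubelet points above concrete, here is a minimal, hedged sketch of the kind of read-only probes an authorized tester might run from inside a compromised pod. The token path and the KUBERNETES_* environment variables are Kubernetes defaults; whether any of these requests succeed depends entirely on how the target cluster is configured.

```bash
# Inside the pod: read the mounted ServiceAccount token (default path)
TOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
APISERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"

# Harmless read request; a successful pod listing suggests the account is over-privileged
curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/namespaces/default/pods"

# If kubectl is available in the pod, enumerate the account's permissions directly
kubectl --token="$TOKEN" --server="$APISERVER" --insecure-skip-tls-verify auth can-i --list

# Probe the current node's kubelet through the docker0 bridge address mentioned above
curl -sk https://172.17.0.1:10250/pods
```

If the pod listing or the kubelet /pods endpoint answers without an authorization error, the over-privileged ServiceAccount or exposed kubelet described above is a viable lateral-movement path and should be locked down.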
Attack Point: Attack the mirror library

Upload malicious images

Uploading a malicious image is also called image poisoning: an attacker uploads a malicious image to a public repository or to the victim's local repository, disguises it as a normal image, and lures the victim into creating a container from it, thereby achieving intrusion. Depending on the goal of the intrusion, malicious images can generally be divided into two types: malicious backdoor images that compromise the container, and malicious EXP images that compromise the host.

Malicious backdoor image

This type of malicious image is mainly used to control the container. Generally, after the victim starts a container from the image, it opens a reverse shell back to the attacker. In this case, the attacker may deploy a mining program or attack the business running in the container.

Malicious EXP image

Such malicious images are no longer satisfied with only compromising the container: the exploit hidden inside usually targets a container escape vulnerability and is intended to gain control of the host.

Using an Nday to attack the mirror library

This refers to attacking the victim's local image repository, such as Harbor or Nexus. Harbor, for example, has had a privilege escalation vulnerability with very serious impact.

The following table lists the vulnerabilities Harbor has disclosed in the six years since its release, collected from public sources for reference:

| CVE number | Type | Risk level |
|----|----|----|
| CVE-2019-16097 | Privilege escalation | Medium risk |
| CVE-2019-16919 | Privilege escalation | Medium risk |
| CVE-2019-3990 | Username enumeration | Medium risk |
| CVE-2019-19025 | CSRF | Medium risk |
| CVE-2019-19026 | SQL injection | Medium risk |
| CVE-2019-19029 | SQL injection | Medium risk |
| CVE-2020-13788 | SSRF | Medium risk |
| CVE-2020-13794 | Username enumeration | Medium risk |
| CVE-2020-29662 | Unauthorized access | Medium risk |
| CVE-2019-19023 | Privilege escalation | Medium risk |
| CVE-2019-19030 | Enumeration | Low risk |

Among them, CVE-2020-13794 can enumerate user information, after which attackers can attempt brute-force cracking. Although it is not a high-risk vulnerability, it is considered the most influential one [2]. Coincidentally, in a previous penetration test we encountered a Harbor mirror library affected by CVE-2020-13794.

The following is a brief demonstration of verifying the vulnerability. First, register two accounts, cstest and cstest2, on Harbor, then execute the following command on the local attack machine:

```bash
curl -X GET "http://[victim-ip]/api/users/search?username=_" -H "accept: application/json" --user cstest:Test123456
```

The IDs and usernames of all users are returned.

Attack Point: Attack third-party components

Some third-party components are also used in the K8S ecosystem, such as service meshes and API gateways. These components may also have vulnerabilities, such as the RCE vulnerability in the open-source API gateway Apache APISIX, and the unauthorized access and RCE vulnerabilities in the service mesh Istio. Space is limited, so this article will not discuss them in detail; interested readers can look them up on their own.

Summary

At present, cloud-native attack and defense technologies and products are developing rapidly, and many excellent exploitation and detection tools have emerged in the community.
The emergence of these tools has greatly lowered the barrier to attack, and as cloud security receives more attention, more and more new attack techniques keep appearing. How to ensure security in the cloud has become a common concern across the industry.

This article (Part 1 and Part 2) systematically summarizes 12 common attack points in K8S clusters and discusses the various risks in cloud-native scenarios based on practical experience. Compared with a traditional intranet scenario, the architecture here is more complex and the attack surface is larger. Faced with such a complex cloud-native application security landscape, security personnel should first understand the security posture of the business architecture in the cloud as a whole, and then, adhering to the principles of defense in depth and least privilege, build a more comprehensive cloud security protection system.

Reference Links

1. https://github.com/danielsagi/kube-dnsspoof
2. http://blog.nsfocus.net/harbor-2/
3. https://cdmana.com/2022/04/202204080012024709.html
4. https://chowdera.com/2022/157/202206061340046327.html

:::info
Also published here.
:::

Read More...
posted about 23 hours ago on hacker noon
As a representative of cloud-native management and orchestration systems, Kubernetes (K8S for short) is receiving more and more attention. One report [1] shows that 96% of organizations are using or evaluating K8S, and it is now widely used in production environments. K8S is very powerful, and its system complexity is correspondingly high. Generally speaking, the more complex a program is, the easier it is for security problems to appear, and K8S clusters do face serious security threats, such as unauthorized access to K8S components, container escape, and lateral attacks.

Offense and defense reinforce each other and coexist. As security personnel, we should first grasp, as a whole, the security threats the business architecture may face before we can do a good job of defending it. This article looks at the possible attack points under a K8S cluster architecture from an attacker's perspective. Based on previous penetration-testing experience, we have sorted out the security issues that can arise under a K8S cluster architecture and marked the potential attack points in the cluster infrastructure.

This topic is split into two parts, and this is the first. It mainly introduces attacks on K8S components, external services of nodes, business pods, and container escape, which correspond to attack points 1-7 in Figure 1. The rest will be covered in the second part.

K8S cluster attack points

Attack point: Attack K8S components

The problem with K8S components mainly comes down to the insecure configuration of each component. Attack points 1-4 list four representative component problems: unauthorized access to the API Server, unauthorized access to etcd, unauthorized access to the kubelet, and insecure kube-proxy configuration. Many other components have similar problems; for example, the dashboard and docker also carry the hidden danger of unauthorized access. These are important system components of the K8S cluster: once one is compromised, an attacker can directly obtain the permissions of the corresponding cluster, node, or container. Table 1 lists the risky default ports of each component for reference (a few quick probes against these ports are sketched below, after the MySQL example):

| Component name | Default port |
|----|----|
| api server | 8080/6443 |
| dashboard | 8001 |
| kubelet | 10250/10255 |
| etcd | 2379 |
| kube-proxy | 8001 |
| docker | 2375 |
| kube-scheduler | 10251 |
| kube-controller-manager | 10252 |

Attack point: Attack node external services

In addition to the services a node deliberately exposes, there may also be some "hidden" open services that should not be reachable from the external network. This can be caused by the administrator's negligence, or by interfaces deliberately left open for convenient management; in short, they are also potential attack points. For example, we have previously encountered a weak-password login on an externally exposed MySQL service: one of the nodes of the target system mapped the MySQL service port externally through a NodePort, and after a few attempts it could be logged into with a weak password. This comes down to a lack of security awareness on the administrator's part.
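As promised above, here is a minimal, hedged sketch of how the risky default ports in Table 1 (and the NodePort range where "hidden" services such as MySQL tend to appear) can be probed for unauthenticated access. The `<node-ip>` value is a placeholder for a node you are authorized to test; none of these requests should succeed on a properly configured cluster.

```bash
# API server insecure port: anonymous access returns cluster objects if it is enabled
curl -s http://<node-ip>:8080/api/v1/namespaces

# kubelet read-only port: lists every pod on the node without authentication
curl -s http://<node-ip>:10255/pods

# etcd without client certificates: if the version banner answers, key dumps may follow
curl -s http://<node-ip>:2379/version

# Docker daemon exposed over TCP: an answer here means full control of the node's containers
curl -s http://<node-ip>:2375/info

# Sweep the NodePort range for "hidden" services such as the MySQL example above
nmap -p 30000-32767 <node-ip>
```

Any of these endpoints answering anonymously maps directly onto one of the component attack points above and should be closed or placed behind authentication.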
Speaking of attacks on MySQL, we have summarized three MySQL attack paths from previous penetration tests for reference, as shown in Figure 2:

1. Directly access MySQL through an externally exposed interface such as a NodePort and log in to the database with a weak password (step 1 in Figure 2).
2. Attack the application, obtain a shell in the pod, find the intranet address of the MySQL service in the pod's environment variables, and then try to log in with a weak password (steps 2-1 and 2-2 in Figure 2).
3. Attack the application, obtain a shell in the pod, and successfully escape to the node. Use docker inspect to view, or directly enter, the MySQL container running on the current node. Its environment variables may hold the database name, root password, database login address, and other information (provided that the MySQL container and the application container are deployed on the same node; whether sensitive information such as database passwords ends up in environment variables depends on how the MySQL container is configured) (steps 3-1, 3-2, and 3-3 in Figure 2).

Attack point 6: Attack business pods

In a cloud-native environment, the upper-layer applications are, to an attacker, the entrances to the cluster. The goal of attacking the application is to break through that entrance and reach the business environment, that is, a shell in the pod where the business runs. After so many years of web security development, there is no shortage of exploitable vulnerabilities, such as the Log4j2 RCE (CVE-2021-44228) disclosed at the end of 2021 and the more recent Spring RCE (CVE-2022-22965). Their impact is large and their exploitation is simple (for the Log4j2 vulnerability, for example, the exploitation method differs between high and low JDK versions, but plenty of ready-made EXPs are already available online). Once successfully exploited, they can completely take over the business pod. Although the permissions after entering the pod are still limited, the attacker is finally inside the cluster. From there, more attack methods can be tried, such as lateral movement and escape, to gradually expand the foothold until the entire cluster is controlled. Before that, some attacks can also be carried out locally in the pod, such as information collection, privilege escalation, and denial of service.

Information collection

When entering a new pod, the first thing to do is gather information about the current environment: first, environmental information to prepare for subsequent attacks. Here is some of the more valuable information to look for:

OS, kernel, and basic user information
Available Capabilities
Available Linux commands
Mounts
Network configuration
Cloud vendor metadata API information

Second comes sensitive-service discovery and sensitive-information scanning. Sensitive services can be discovered by scanning the ports of specific network segments on the intranet.
In addition to the ports of the K8S components, these common service ports are worth checking:

ssh: 22
http: 80/8080
https: 443/8443
mysql: 3306
cAdvisor: 4194
NodePort services: 30000-32767

Sensitive information includes business-related sensitive files (such as code, databases, AK/secret keys, or important configuration files involved in the business), environment variables (which may expose sensitive service information), K8S ServiceAccount information (stored under the /run/secrets/kubernetes.io/serviceaccount/ directory by default), process information (whether sensitive services are running), and so on.

Privilege escalation

There are two types of privilege escalation in K8S: privilege escalation inside the pod, and K8S privilege escalation.

Privilege escalation in the pod

Privilege escalation in a pod is similar to traditional Linux privilege escalation: elevating an ordinary user's shell in the pod to a shell with root privileges. Generally, even after getting a shell in the pod, you only have ordinary user privileges, which limits what you can do, so a privilege escalation vulnerability is needed to gain root. There are many ways to escalate privileges, such as kernel vulnerabilities, sudo, suid, and cron jobs. It is worth mentioning that some kernel vulnerabilities can also be used for container escape, such as the famous DirtyCow (CVE-2016-5195) and DirtyPipe (CVE-2022-0847), which are mentioned again in the "Container escape" section below.

K8S privilege escalation

There are many methods and scenarios for K8S privilege escalation, such as RBAC privilege escalation [2], as well as some known Ndays, such as CVE-2018-1002105 and CVE-2020-8559.

Denial of service

Denial of service (DoS) attacks can be viewed at three levels: the business, the pod, and the cluster. DoS attacks on services and pods can be carried out with stress-testing tools or techniques, mainly resource-exhaustion attacks on CPU, memory, storage, and network. Corresponding tools and methods exist both outside the cluster and inside the pod; readers can search for them on their own. DoS attacks at the cluster level mainly exploit software vulnerabilities in the K8S cluster, such as CVE-2019-11253, CVE-2019-9512, and CVE-2019-9514.

Attack point 7: Container escape

In cloud attack and defense, getting a shell in a container/pod is often only the first step of a successful attack. A container is essentially a process on Linux, and because of mechanisms such as Namespaces and Cgroups, the permissions of processes inside the container are very limited. Container escape is about breaking through these restrictions, so container escape can in fact also be considered a kind of privilege escalation. The causes of container escape can be summarized into three categories: insecure container configuration, vulnerabilities in related components, and kernel vulnerabilities.

Insecure configuration of the container

Insecure container configuration falls into two cases: the first is that the container is granted dangerous permissions, and the second is that the container mounts a dangerous directory.
The details are shown in Table 2:

| Category | Insecure configuration |
|----|----|
| Dangerous permissions | Privileged container |
| | CAP_SYS_ADMIN |
| | CAP_DAC_READ_SEARCH |
| | CAP_SYS_MODULE |
| | CAP_SYS_PTRACE with --pid=host |
| Dangerous mounts | Mounting docker.sock |
| | Mounting procfs |
| | Mounting /, /root, /etc, and other host directories |
| | Mounting lxcfs in rw mode |
| | Pod mounting /var/log |

Dangerous permissions refer to privileged mode (privileged containers) and dangerous Capabilities (such as CAP_SYS_ADMIN, CAP_SYS_MODULE, and CAP_DAC_READ_SEARCH), which can be set through startup parameters when the container starts. As mentioned above, a container is essentially a restricted process: in addition to the namespace and resource restrictions imposed by Namespaces and Cgroups, security mechanisms such as Capabilities, AppArmor, and Seccomp limit the permissions of processes in the container. With the dangerous permissions above, the security mechanisms restricting the container are broken, which opens the door for attackers.

Mounting a dangerous directory into a container can break the container's filesystem isolation and thus lead to escalated privileges. For example, if /var/run/docker.sock is mounted, communication with the docker daemon is possible from within the container, and an attacker can create a privileged container and escape.

Mentioned here are some of the most common insecure configurations used in container escape attacks. In addition, the CIS Docker Benchmark [3] proposes hundreds of security configuration benchmarks for docker containers. Compared with vulnerability protection, security configuration issues are often easier to ignore; for attackers, insecure container configurations are often easier to detect and exploit than the component and kernel vulnerabilities discussed below.

Vulnerabilities of related components

A container cluster environment contains many component programs that cooperate to form a huge container service ecosystem. These components include, but are not limited to, runc, containerd, docker, and the kubelet. Any program can have vulnerabilities, and container-related components are no exception. However, compared with insecure container configurations, most of these vulnerabilities are harder to exploit. For example, CVE-2019-5736 requires interaction between the host and the container to trigger, and the exploit is "one-time use" and easy to expose because it breaks runc. Table 3 summarizes common vulnerabilities in some related components:

| Component | Vulnerability |
|----|----|
| runc | CVE-2019-5736 |
| | CVE-2019-16884 |
| | CVE-2021-30465 |
| containerd | CVE-2020-15257 |
| | CVE-2022-23648 |
| CRI-O | CVE-2022-0811 |
| docker | CVE-2018-15664 |
| | CVE-2019-14271 |
| kubectl | CVE-2018-1002100 |
| | CVE-2019-1002101 |
| | CVE-2019-11246 |
| | CVE-2019-11249 |
| | CVE-2019-11251 |
| kubelet | CVE-2017-1002101 |
| | CVE-2021-25741 |

Kernel vulnerabilities

The biggest difference between a container and a virtual machine is that the container shares the kernel with the host. If the host's kernel has a vulnerability, all containers on that host are affected. However, not all kernel vulnerabilities can be used for container escape.
Here are some known kernel vulnerabilities that can be used for container escape:

CVE-2016-5195
CVE-2017-1000112
CVE-2017-7308
CVE-2020-14386
CVE-2021-22555
CVE-2022-0185
CVE-2022-0492
CVE-2022-0847

That said, exploits for kernel vulnerabilities are risky to use and may even crash the target system if tried blindly (especially during an internal penetration test or a security check of a production system).

Summary

This article has summarized methods and experience for attacking K8S components, node external services, business pods, and container escape in a K8S cluster, namely attack points 1-7 in Figure 1. The next part continues with attack points 8-12 in Figure 1: lateral attacks, as well as attacks on the K8S management platform, the image library, and third-party components.

Reference Links

1. https://www.cncf.io/wp-content/uploads/2022/02/CNCF-Annual-Survey-2021.pdf
2. https://published-prd.lanyonevents.com/published/rsaus20/sessionsFiles/18100/2020USA20DSO-W0101Compromising Kubernetes Cluster by Exploiting RBAC Permissions.pdf
3. https://github.com/dev-sec/cis-docker-benchmark
4. https://github.com/cdk-team/CDK

:::info
Also published here.
:::

Read More...