Monday, December 26, 2022

What to expect from AI in 2023


As a rather commercially successful author once wrote, “the night is dark and full of terrors, the day bright and beautiful and full of hope.” It’s fitting imagery for AI, which like all tech has its upsides and downsides.

Art-generating models like Stable Diffusion, for instance, have led to incredible outpourings of creativity, powering apps and even entirely new business models. On the other hand, Stable Diffusion's open source nature lets bad actors use it to create deepfakes at scale — all while artists protest that it's profiting off of their work.

What’s on deck for AI in 2023? Will regulation rein in the worst of what AI brings, or are the floodgates open? Will powerful, transformative new forms of AI emerge, à la ChatGPT, and disrupt industries once thought safe from automation?

Expect more (problematic) art-generating AI apps

With the success of Lensa, the AI-powered selfie app from Prisma Labs that went viral, you can expect a lot of me-too apps along these lines. And expect them to also be capable of being tricked into creating NSFW images, and to disproportionately sexualize and alter the appearance of women.

Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, said he expects the integration of generative AI into consumer tech to amplify the effects of such systems, both good and bad.

Stable Diffusion, for example, was fed billions of images from the internet until it “learned” to associate certain words and concepts with certain imagery. Text-generating models have routinely been easily tricked into espousing offensive views or producing misleading content.

Mike Cook, a member of the Knives and Paintbrushes open research group, agrees with Gahntz that generative AI will continue to prove a major — and problematic — force for change. But he thinks that 2023 has to be the year that generative AI “finally puts its money where its mouth is.”

Prompt by TechCrunch, model by Stability AI, generated in the free tool Dream Studio.

“It’s not enough to motivate a community of specialists [to create new tech] — for technology to become a long-term part of our lives, it has to either make someone a lot of money, or have a meaningful impact on the daily lives of the general public,” Cook said. “So I predict we’ll see a serious push to make generative AI actually achieve one of these two things, with mixed success.”

Artists lead the effort to opt out of data sets

DeviantArt released an AI art generator built on Stable Diffusion and fine-tuned on artwork from the DeviantArt community. The art generator was met with loud disapproval from DeviantArt’s longtime denizens, who criticized the platform’s lack of transparency in using their uploaded art to train the system.

The creators of the most popular systems — OpenAI and Stability AI — say that they’ve taken steps to limit the amount of harmful content their systems produce. But judging by many of the generations on social media, it’s clear that there’s work to be done.

“The data sets require active curation to address these problems and should be subjected to significant scrutiny, including from communities that tend to get the short end of the stick,” Gahntz said, comparing the process to ongoing controversies over content moderation in social media.

Stability AI, which is largely funding the development of Stable Diffusion, recently bowed to public pressure, signaling that it would allow artists to opt out of the data set used to train the next-generation Stable Diffusion model. Through the website HaveIBeenTrained.com, rightsholders will be able to request opt-outs before training begins in a few weeks’ time.

OpenAI offers no such opt-out mechanism, instead preferring to partner with organizations like Shutterstock to license portions of their image galleries. But given the legal and sheer publicity headwinds it faces alongside Stability AI, it’s likely only a matter of time before it follows suit.

The courts may ultimately force its hand. In the U.S., Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by letting Copilot, GitHub’s service that intelligently suggests lines of code, regurgitate sections of licensed code without providing credit.

Perhaps anticipating the legal challenge, GitHub recently added settings to prevent public code from showing up in Copilot’s suggestions and plans to introduce a feature that will reference the source of code suggestions. But they’re imperfect measures. In at least one instance, the filter setting caused Copilot to emit large chunks of copyrighted code including all attribution and license text.

Expect to see criticism ramp up in the coming year, particularly as the U.K. mulls over rules that would remove the requirement that systems trained on public data be used strictly non-commercially.

Open source and decentralized efforts will continue to grow

2022 saw a handful of AI companies dominate the stage, primarily OpenAI and Stability AI. But the pendulum may swing back towards open source in 2023 as the ability to build new systems moves beyond “resource-rich and powerful AI labs,” as Gahntz put it.

A community approach may lead to more scrutiny of systems as they are being built and deployed, he said: “If models are open and if data sets are open, that’ll enable much more of the critical research that has pointed to a lot of the flaws and harms linked to generative AI and that’s often been far too difficult to conduct.”

OpenFold

Image Credits: Results from OpenFold, an open source AI system that predicts the shapes of proteins, compared to DeepMind’s AlphaFold2.

Examples of such community-focused efforts include large language models from EleutherAI and BigScience, an effort backed by AI startup Hugging Face. Stability AI is funding a number of communities itself, like the music-generation-focused Harmonai and OpenBioML, a loose collection of biotech experiments.

Money and expertise are still required to train and run sophisticated AI models, but decentralized computing may challenge traditional data centers as open source efforts mature.

BigScience took a step toward enabling decentralized development with the recent release of the open source Petals project. Petals lets people contribute their compute power, similar to Folding@home, to run large AI language models that would normally require a high-end GPU or server.
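The Folding@home comparison can be made concrete with a toy sketch (illustrative only; the real Petals API, peer discovery and fault tolerance are far more involved): a model's layers are partitioned into blocks hosted by volunteer peers, and a client streams activations through them in order.

```python
# Conceptual sketch of Petals-style distributed inference (an illustrative
# toy, not the real Petals API): a model's layers are partitioned into
# blocks hosted by volunteer "peers", and the client streams activations
# through the peers in sequence, Folding@home-style.

def make_layer(weight):
    # Stand-in for one transformer block: here, a simple affine map.
    return lambda xs: [x * weight + 1 for x in xs]

class Peer:
    """A volunteer machine hosting a contiguous block of layers."""
    def __init__(self, layers):
        self.layers = layers

    def forward(self, activations):
        for layer in self.layers:
            activations = layer(activations)
        return activations

def run_inference(peers, inputs):
    # The client never holds the full model; it only relays activations
    # from one peer to the next.
    activations = inputs
    for peer in peers:
        activations = peer.forward(activations)
    return activations

# An eight-"layer" model split across four peers, two layers each.
layers = [make_layer(w) for w in (2, 3, 1, 2, 1, 1, 2, 1)]
peers = [Peer(layers[i:i + 2]) for i in range(0, 8, 2)]

distributed = run_inference(peers, [1.0])

# Sanity check: identical to running every layer on one machine.
local = [1.0]
for layer in layers:
    local = layer(local)
assert distributed == local
```

In the real system the blocks are transformer layers served over a network, but the control flow, a client relaying activations between peers that each hold a slice of the model, is the same idea.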

“Modern generative models are computationally expensive to train and run. Some back-of-the-envelope estimates put daily ChatGPT expenditure at around $3 million,” Chandra Bhagavatula, a senior research scientist at the Allen Institute for AI, said via email. “To make this commercially viable and accessible more widely, it will be important to address this.”

Chandra points out, however, that large labs will continue to have competitive advantages as long as their methods and data remain proprietary. In a recent example, OpenAI released Point-E, a model that can generate 3D objects given a text prompt. But while OpenAI open sourced the model, it didn’t disclose the sources of Point-E’s training data or release that data.

OpenAI Point-E

Point-E generates point clouds.

“I do think the open source efforts and decentralization efforts are absolutely worthwhile and are to the benefit of a larger number of researchers, practitioners and users,” Chandra said. “However, despite being open-sourced, the best models are still inaccessible to a large number of researchers and practitioners due to their resource constraints.”

AI companies buckle down for incoming regulations

Regulation like the EU’s AI Act may change how companies develop and deploy AI systems moving forward. So could more local efforts like New York City’s AI hiring statute, which requires that AI and algorithm-based tech for recruiting, hiring or promotion be audited for bias before being used.

Chandra sees these regulations as necessary especially in light of generative AI’s increasingly apparent technical flaws, like its tendency to spout factually wrong info.

“This makes generative AI difficult to apply for many areas where mistakes can have very high costs — e.g. healthcare. In addition, the ease of generating incorrect information creates challenges surrounding misinformation and disinformation,” she said. “[And yet] AI systems are already making decisions loaded with moral and ethical implications.”

Next year will only bring the threat of regulation, though — expect much more quibbling over rules and court cases before anyone gets fined or charged. But companies may still jockey for position in the most advantageous categories of upcoming laws, like the AI Act’s risk categories.

The rule as currently written divides AI systems into one of four risk categories, each with varying requirements and levels of scrutiny. Systems in the highest risk category, “high-risk” AI (e.g. credit scoring algorithms, robotic surgery apps), have to meet certain legal, ethical and technical standards before they’re allowed to enter the European market. The lowest risk category, “minimal or no risk” AI (e.g. spam filters, AI-enabled video games), imposes only transparency obligations like making users aware that they’re interacting with an AI system.

Os Keyes, a Ph.D. candidate at the University of Washington, expressed worry that companies will aim for the lowest risk level in order to minimize their own responsibilities and visibility to regulators.

“That concern aside, [the AI Act is] really the most positive thing I see on the table,” they said. “I haven’t seen much of anything out of Congress.”

But investments aren’t a sure thing

Gahntz argues that, even if an AI system works well enough for most people but is deeply harmful to some, there’s “still a lot of homework left” before a company should make it widely available. “There’s also a business case for all this. If your model generates a lot of messed up stuff, consumers aren’t going to like it,” he added. “But obviously this is also about fairness.”

It’s unclear whether companies will be persuaded by that argument going into next year, particularly as investors seem eager to put their money behind any promising generative AI.

In the midst of the Stable Diffusion controversies, Stability AI raised $101 million at an over-$1 billion valuation from prominent backers including Coatue and Lightspeed Venture Partners. OpenAI is said to be valued at $20 billion as it enters advanced talks to raise more funding from Microsoft. (Microsoft previously invested $1 billion in OpenAI in 2019.)

Of course, those could be exceptions to the rule.

Jasper AI

Image Credits: Jasper

Outside of self-driving companies Cruise, Wayve and WeRide and robotics firm MegaRobo, the top-performing AI firms in terms of money raised this year were software-based, according to Crunchbase. Contentsquare, which sells a service that provides AI-driven recommendations for web content, closed a $600 million round in July. Uniphore, which sells software for “conversational analytics” (think call center metrics) and conversational assistants, landed $400 million in February. Meanwhile, Highspot, whose AI-powered platform provides sales reps and marketers with real-time and data-driven recommendations, nabbed $248 million in January.

Investors may well chase safer bets like automating analysis of customer complaints or generating sales leads, even if these aren’t as “sexy” as generative AI. That’s not to suggest there won’t be big attention-grabbing investments, but they’ll be reserved for players with clout.

What to expect from AI in 2023 by Kyle Wiggers originally published on TechCrunch




Saturday, December 24, 2022

This year in tech felt like a simulation


This year in tech, too much happened and very little of it made sense. It was like we were being controlled by a random number generator that would dictate the whims of the tech industry, leading to multiple “biggest news stories of the year” happening over the course of a month, all completely disconnected from one another.

I can’t stop thinking about a very good tweet I saw last month, which encapsulated the absurdity of the year — it was something along the lines of, “Meta laid off 11,000 people and it’s only the third biggest tech story of the week.” Normally, a social media giant laying off 13% of its workforce would easily be the week’s top story, but this was the moment when FTX went bankrupt and everyone was impersonating corporations on Twitter because somehow Elon Musk didn’t think through how things would go horribly wrong if anyone could buy a blue check. Oh, good times.

When I say it feels like we’re living in a simulation, what I mean is that sometimes, I hear about the latest tech news and feel like someone threw some words in a hat, picked a few, and tried to connect the dots. Of course, that’s not what’s really happening. But in January, would you have believed me if I told you that Twitter owner Elon Musk polled users to decide that he would unban Donald Trump?

These absurd events in tech have consequences. Crypto collapses like FTX’s bankruptcy and the UST scandal have harmed actual people who invested significant sums of money into something that they believed to be a good investment. It’s funny to think about how you’d react ten years ago if someone told you that Meta (oh yeah, that’s what Facebook is called now) is losing billions of dollars every quarter to build virtual reality technology that no one seems to want. But those management decisions are not a joke for the employees who lost their jobs because of those choices.

Where does this leave us? We’re in a moment in tech history where nothing is too absurd to be possible. That’s both inspiring and horrifying. It’s possible for a team of Amazon fulfillment center workers in Staten Island to win a union election, successfully advocating for themselves in the face of tremendous adversity. It’s also possible for Elon Musk to buy Twitter for $44 billion.

AI technology like Stable Diffusion and ChatGPT encapsulate this fragile balance between innovation and horror. You can make beautiful artworks in seconds, and you can also endanger the livelihoods of working artists. You can ask an AI chatbot to teach you about history, but there’s no way to know if its response is factually accurate (unless you do further research, in which case, you could’ve just done your own research to begin with).

But perhaps part of the reason why AI generators have garnered such mainstream appeal is that they almost feel natural to us. This year's tech stories feel so bizarre that they might as well have been generated by ChatGPT.

Or maybe reality is actually stranger than anything an AI could come up with. I asked ChatGPT to write some headlines about tech news for me, and it came up with these snoozers (in addition to some factually inaccurate headlines, which I omitted for the sake of journalism):

  • “Apple’s iOS 15 update brings major improvements to iPhones and iPads”
  • “Amazon’s new line of autonomous delivery robots causes controversy”
  • “Intel announces new line of processors with advanced security features”

Pretty boring! Here are some actual real things that happened in tech this year:

  • Tony the Tiger made his debut as a VTuber.
  • Someone claimed to be a laid off Twitter employee named Rahul Ligma, and a herd of reporters did not get the joke, inadvertently meaning that I had to explain the “ligma” joke on like four different tech podcasts.
  • Three people got arrested for operating a Club Penguin clone.
  • One of the Department of Justice’s main suspects in a $3.6 billion crypto money laundering scheme is an entrepreneur-slash-rapper named Razzlekhan.
  • The new Pokémon game has a line of dialogue with the word “cheugy.”
  • Donald Trump dropped an NFT collection.
  • A bad Twitter feature update impacted the stock of a pharmaceutical company.
  • Elon Musk’s greatest rival is a University of Central Florida sophomore.
  • FTC chair Lina Khan said that Taylor Swift did more to educate Gen Z about antitrust law than she ever could.
  • Meta is selling a $1,499 VR headset to be used for remote work.
  • The UK Treasury made a Discord account to share public announcements but was immediately spammed with people using emoji reactions to make dirty jokes (and speaking of the UK, there have been three different Prime Ministers since September).

These are strange times. If the rules are made up and the points don’t matter, let’s at least hope that if the absurdity continues into 2023, the tech news is more amusing than harmful. I want more Chris Pratt voicing live action Mario, and fewer tech CEOs being sentenced for fraud. Is that too much to ask?

This year in tech felt like a simulation by Amanda Silberling originally published on TechCrunch




Monday, December 19, 2022

Three counterintuitive 2023 predictions about Musk, SBF and even Kraft


Bradley Tusk — who spent his early career in Democratic politics and later became a consultant and lobbyist for private companies battling regulators — spends much of his time these days as a venture capitalist. But while Tusk is a generalist, he insists he isn’t interested in just any startup; his expertise, he says, is at the intersection of tech and regulation, and his firm adds the most value to startups in sectors where changing regulations are bound to alter the scale of the opportunity they are chasing.

As a service to Tusk Ventures’s current portfolio — and a kind of calling card for potential founders — Tusk every year puts together some thoughts about the changes he sees coming over the next 12-month period. Because he’s often proven right in retrospect, we hopped on a call with him late last week to discuss some of his many 2023 predictions, and these three stood out to us in particular, so we thought we’d share them here.

1) Major CPG brands start selling cannabis products, wiping out a lot of cannabis startups that were operating in the relative shadows. Here's Tusk, discussing why:

Big brands [sell] alcohol all of the time, and cannabis, many people would argue, is a less harmful substance than alcohol. We’ve got this real disconnect between the close to two-thirds of states where cannabis is legal recreationally or medicinally and the federal government. It’s still on Schedule 1 at the DEA [along with] heroin and meth and cocaine . . . which really doesn’t make a lot of sense, especially as states keep legalizing it entirely.

President Biden has said, ‘Let’s remove this from Schedule 1.’ Once that happens all of a sudden all kinds of interstate commerce that so far has not been allowed will open up. So you’ll be able to have real banking, trucking of [plants] across state lines, advertising . . . All the things that a normal, really big company — a Kraft or Unilever and Anheuser-Busch or Philip Morris — might engage in, they can’t really do under the current system, but once the federal restrictions are loosened, then all of a sudden it opens up for them.

One [question I’ve asked cannabis founders over the years is] how are they going to compete with Unilever? Why would Unilever choose to buy them as opposed to just burying them? And most of the time, the answer is they can’t [compete]. They’re really just racing against the clock, hoping the federal government doesn’t actually do the right thing. But I think once cannabis goes off Schedule 1, and I don’t know if it happens in six months or two years, big companies will get into the game [because] there’s money to be made. And a lot of cannabis startups that were highly valued or overvalued or that traded at really high multiples on the Canadian stock exchange are going to feel a lot of pain.

2) Instead of driving further crypto regulation, Sam Bankman-Fried and the abrupt implosion of FTX actually wind up playing a minor role in any new regulations that get enacted (though Tusk does think we’ll see more regulation at the state and federal level in the next 12 months). Here’s Tusk:

When the FTX blow-up started happening, my take was, ‘Okay, this is going to lead to a lot of very harsh crypto regulation that will be bad for the sector, because SEC chief Gary Gensler has been pushing for this for a long time and it hasn’t happened yet because crypto is very popular among a lot of actual real people.’ I thought FTX would give him the cover to move very aggressively against the industry as a whole.

In a weird way since then, as the story gets crazier and crazier and it looks more and more like Sam Bankman-Fried was simply a criminal mastermind who was defrauding people out of tens of billions of dollars, and [that this debacle] is not something specifically related to crypto per se, it actually shifts the argument again. It [shifts from] ‘this whole industry is out of control’ to ‘this person was out of control.’ It’s almost gotten so extreme that it’s actually helping [tamp down talk of overregulation].

3) Twitter ends up costing Musk far more than the $44 billion he and his investors paid for it . . .

What Musk did is consistent with things we’re seeing across the cultural zeitgeist right now, which is that in this world of 24/7 media coverage and social media activity, the people who really need attention and can’t get enough of it just have to keep doing more and more outrageous things to try to get it. We saw that with Donald Trump. We saw that with Kanye West. And the main reason Musk bought Twitter is so that people would be talking about him, just as we are right now. From that standpoint, I suspect he’s achieved his goal.

What worries me for him is that when you look at the market cap of Tesla, for example, it is significantly higher than Toyota’s or General Motors’, companies that sell a lot more cars. Tesla makes a great car, and they’re growing, and it’s okay for them to lean into the future. But the differential between what [Tesla] probably should be valued at and where it is valued is that Elon Musk hype and pixie dust. He managed to create such an image of being so far in the future and so much better than everyone else that it really drives retail investment in the stock. The same is true of SpaceX. While that’s still a private company, I saw a piece yesterday saying that it’s now valued at $140 billion, [yet] there’s no way SpaceX could be [worth] $140 billion given its revenue. So his genius in some ways is that he manages to create this perception that what he’s doing is so innovative and so unique, and that only he can do it; it drives tremendous amounts of value and investment toward his companies.

The really big risk with Twitter is that every time he does something really high profile and public, he puts that reputation on the line. He has taken over Twitter, which no one has ever really figured out how to make a successful business, and now it’s in his hands. And so far, the ideas that he’s put out there don’t sound that new or interesting to me; they feel like variations of things that people have already done before in different ways. And if he does not succeed with Twitter, the question is, does it puncture the balloon for Tesla, and SpaceX and all his other projects? He may have paid $44 billion for Twitter, but ultimately, this could cost him $100 billion or more if Tesla, SpaceX and the other companies that he owns lose value because he’s exposed as being a mere mortal.

 . . . and no, it doesn’t create great opportunities for startups looking to capitalize on the chaos at Twitter, per Tusk. More here:

There’s just not a great revenue model for all of this to begin with. To make matters worse for them, I still think there’s a risk that Section 230 of the Communications Decency Act eventually gets changed or repealed. Right now, it exempts platforms from liability for content posted by users, so I can defame you on Twitter, and you could sue me personally but you couldn’t sue Twitter. And as a result, for Twitter, Facebook and all the platforms, the real economic incentive is to move toward negative and toxic content, because as much as we hate it, that drives eyeballs and clicks and thus drives advertising rates and revenue. So effectively, the platforms’ lack of liability is creating a world where the internet has to be as toxic and awful as possible.

But if [we repeal] Section 230, it’ll be a lot like what happened with the tobacco companies beginning in the 1980s, when all of a sudden they were vulnerable to litigation and started receiving multibillion-dollar judgments; as a result, they felt real economic pain and finally had to get a hold of their [marketing practices] because it was costing them more money than otherwise. Right now Facebook will pay the little fines that it gets from the FTC, because ultimately they make so much money driven by negative content. Repealing Section 230 would change that.

Three counterintuitive 2023 predictions about Musk, SBF and even Kraft by Connie Loizos originally published on TechCrunch




Sunday, December 18, 2022

How to Find X When Your Product + X = Success



Exploring a Product’s Path to Success

Ahhh 😌. Wouldn't it be nice if there were a world with short, instant-success business tips? Yes, that one, the same world with free housing and magic carpet rides. Unfortunately, there is no such world, and there is no TL;DR for successful product management 🤗. This is a long read. Hopefully, though, like a Harry Potter novel, it'll be so good you won't want it to end 😁.

Picture this. You have a product. You're most likely either a founder (if that's you, by the way, congratulations on beginning the adventure that is entrepreneurship) or someone who likes a challenge, working at a company where your task is bringing growth. Brilliant. Either way, you have a product. The next step is crucial. As the contrasting fortunes of Google's Android (launch to success) and Microsoft's Windows Phone (launch to failure) suggest, the path to either outcome is not always obvious. In this article, I'll lay out some theories that should help you figure out what's best.

Symbiosis

Photo by Rishabh Dharmani (rishabhdharmani) on Unsplash

One avenue worth exploring is finding a symbiotic relationship with another business. An example could be Company A offering a bonus item or discount code with the purchase of an item from Company B, such as Spotify offering premium membership codes inside cans of Pringles. Ideally, look for companies of similar size, or companies keenly interested in boosting their sales. This increases the likelihood that such a deal can be done without requiring a cash payment. Companies are also likely to accept deals that require minimal to no effort on their part. There's an added benefit: your product effectively hedges its bets on marketing, since it gains from all the marketing methods used by your partner companies too. It's essential that partners are chosen carefully so as to avoid detrimental brand association.

Once You Get to Know Me

Photo by Jonas Stolle (jostolle) on Unsplash

It is very helpful to know how your target customers feel about your product. After all, customer sentiment is usually what determines whether customers pick a product or not. Surveys are your friend here. Speaking to customers directly is even better. When surveying your target market, learning from those who have heard of your product and chose not to use it can be as valuable as learning from those who use it. For example, knowing what to remove could be as important as knowing what to add.


Be Careful

Photo by Justin Chrn (justinchrn) on Unsplash

Being the first in a new industry that proves to be successful in the long run could be a great growth strategy. For example, being the first website or YouTube channel to host tutorials on a new programming tool could lead to prominence in that field.

An alternate example is a programmer who discovers a new platform, e.g., Windows Phone at its time of launch. For a developer, putting a lot of resources into a young platform may mean great exposure at a time of little competition. It could also mean wasted effort on a platform that ends up failing.

Some growth opportunities aren't particularly clear-cut as to whether they are a good idea or not. There is no magic formula for business success, and it is in instances like this that careful analysis and planning are required. A case in point is Mixer, Microsoft's game streaming platform, which launched as a competitor to the more popular Twitch. In Mixer's early days, two groups did well: new creators who rose to prominence on the less crowded platform, and prominent creators who were paid a lot of money to leave Twitch and stream on Mixer exclusively. So even though Mixer shut down a few years later, for some, their time there was well worth it. The larger creators, though they returned to reduced fan bases on Twitch, had their Mixer signing bonuses to make the exercise a possible net benefit. Smaller creators, who could not carry their Mixer followers over to other platforms and had to start over with much smaller fan bases, may not feel the same. Then again, the argument could be made that those smaller creators would never have found their fame had they started on Twitch. The lesson here is that chasing maximum benefit is not always a simple decision of committing or not; sometimes it requires careful consideration of the entry point and the exit strategy.

Care should also be taken with growth strategies that alter the product significantly. An example comes from the book Hacking Growth, in which co-author Sean Ellis described how raising prices at his startup, Qualaroo, led to a 400% increase in revenue. Apparently, the higher price gave the product an appearance of quality. Though this may have cost Qualaroo smaller customers, the loss was more than offset by the gain in large corporate customers. In other words, they essentially switched out their customer base. It is up to you to decide whether you want to alter your product's focus in that way.
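The "switched out their customers" dynamic is easy to see with a quick sketch. The figures below are invented purely for illustration, not taken from the book:

```python
# Hypothetical illustration of a price increase swapping out a customer
# base: all numbers are invented for this example, not from Hacking Growth.
old_price, old_customers = 50, 1000    # many small customers
new_price, new_customers = 500, 180    # far fewer, larger customers

old_revenue = old_price * old_customers
new_revenue = new_price * new_customers

growth = (new_revenue - old_revenue) / old_revenue
print(f"customers lost: {1 - new_customers / old_customers:.0%}")  # 82%
print(f"revenue change: {growth:+.0%}")                            # +80%
```

Revenue can rise sharply even while most of the original customer base walks away, which is exactly the trade-off to weigh before repositioning a product this way.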

Whenever you receive information or advice, it's a good idea to examine its basis. For example, Hacking Growth (published in 2017) cited a 2005 study of software products to support the belief that companies should not release too many features at once. Herein lies a problem: 2005 was a long time ago. No social media, no smartphones; the PlayStation 2 was a current-gen console. The book also pointed to a presentation showing Microsoft Word with every toolbar enabled. That could be seen as an over-generalization and a misrepresentation of the product, leading to a faulty conclusion: Word was never meant to be used with every toolbar enabled. How features are presented and enabled is what matters, not how many there are. The focus should be on giving users the things they need.

Don't Be Evil

Photo by Paulette Vautour (vautourp) on Unsplash

Be careful of growth strategies that come at a non-obvious cost to the company. Consider the pharmaceutical company Mylan (now Viatris after a merger). It manufactures the EpiPen, an epinephrine auto-injector used as first aid for severe allergic reactions, which it acquired in 2007.

\
At that time the price was 100 USD for a two-pack. By 2009 the price was about $103.50. A series of steep price increases raised the price to about $609 in May 2016. All this occurred during a period that saw less than 4% annual inflation in the US.
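To put those increases in perspective, here is a back-of-the-envelope sketch (using the approximate prices cited above, 2007 to May 2016) comparing EpiPen's annualized price growth with the inflation ceiling mentioned:

```python
# Rough comparison of EpiPen price growth vs. US inflation,
# using the approximate figures cited above (2007 -> May 2016, ~9 years).
def annualized_growth(start_price: float, end_price: float, years: float) -> float:
    """Compound annual growth rate between two prices."""
    return (end_price / start_price) ** (1 / years) - 1

epipen_growth = annualized_growth(100.0, 609.0, 9)  # roughly 22% per year
inflation_cap = 0.04                                # under 4% per year

print(f"EpiPen two-pack: ~{epipen_growth:.0%} per year vs. <{inflation_cap:.0%} inflation")
```

In other words, the price compounded at more than five times the stated inflation ceiling, which is why the hikes drew so much attention.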

Through a combination of marketing, lobbying, and price hikes, EpiPen's roughly $200 million a year in revenue rose to $1.5 billion by May 2015.

From a mathematical point of view, this represents fantastic growth. However, revenue was not the only thing Mylan had grown. It had also grown resentment.

Eventually, push-back came, in the form of a class-action lawsuit settled for $264 million. Some of you may have noticed that a one-time $0.264 billion is a lot less than $1.5 billion per year (though that figure is revenue, not profit). Viatris is not the first in history to use this 'cheaper to do the crime' model of business, but it does come at a reputational cost.

Another example is Facebook (now Meta), whose unethical data and business practices are, for many, legendary. Those practices helped it grow its advertising revenue as well as its product's addictiveness. After violating a previous FTC settlement on data handling, the very practices that had helped achieve that high growth, Facebook was fined $5 billion by the FTC. The prior quarter alone saw revenues of over three times that amount. Once again a fine, but not exactly a calamity.

Like Mylan, Facebook had its users over a barrel with an effective monopoly. It had essentially used the 'drug dealer model': users were so dependent on the product that it was hard to regulate the company effectively.

Push them hard enough, though, and, just like The North, consumers will remember. While Viatris and Meta are still profitable companies today, their reputations have taken a solid dent.

Facebook will have realized this when it launched its ill-fated cryptocurrency project, ~~Libra~~ Diem. The push-back was almost immediate. Aware of how low the stock of its reputation was, Facebook prominently promoted the project as the effort of a consortium, but that was not enough to sway public opinion. Partners began to drop from the project, and even a rebranding exercise could not save it.

Another large tech company with a similar reputation problem is Google. In November 2019 Google launched Stadia, a cloud gaming service, and it was immediately met with a wave of pessimism. Given Google's reputation for shuttering products (see killedbygoogle.com), people didn't trust paying for games that would be locked to Google's control, and most of all they didn't trust Google to keep the service running for longer than a few years. Google tried to combat those fears with campaigns, but the pessimism remained. Well, if it walks like a duck, looks like a duck, and quacks like a duck, it is very likely a duck. In late September 2022 Google announced it was shutting Stadia down; a rather simple prophecy had been fulfilled. Part of the reason may have been that Google did not see strong enough user adoption, a problem, ironically, that arose because people did not trust Google not to shut Stadia down prematurely. Coupled with its privacy concerns, Android monopoly case, and app store fees, Google was now a company with a growing reputational problem. In its early days Google had a motto: "Don't Be Evil." It has since been dropped.


It's Not Thaaat Hard

Photo by Chris Liverani (chrisliverani) on Unsplash

Some growth hacks are 'common sense'. I put that in quotes because perspective and background really do matter. One example was Facebook discovering that a reduced-data version of its app, Facebook Lite, could spur growth in developing countries. For instance, India, Facebook's largest market by user count, has a median monthly wage of about $200, while the comparable US figure is about $54,000 a year. People on low-cap data plans are therefore more likely to prefer low-data apps, as data is relatively more valuable to them due to its cost.

Another example is the Microsoft Xbox One launch 'drama'. Microsoft initially planned to require the Xbox One to connect to Microsoft every hour to retain the ability to play games. The reaction to this, along with the removal of the ability to play used games, was brutal. It was so bad that Microsoft backtracked on the idea. One theory was that people in Redmond (Microsoft HQ location) were so used to their constant and easy access to the internet that they subconsciously assumed their customers' lives were that connected as well.

Having a good product-design overview helps you avoid scenarios such as these. Pay attention to your customers before you design the product.


Data, Data, Data

Photo by Alexander Sinn (swimstaralex) on Unsplash

Do data analysis of your product's usage. How are people using it? Could a small feature in fact be the most used part of your product? If it's a shopping app, on what screen do most of your users quit using the app? Is there a problem there?

The information gathered here can help you decide whether changes are needed. Be careful when analyzing data, however: correlation does not always imply causation.

Also, know your users and their types. For example, many people spend hours a day scrolling through social media without posting anything, while others post but spend less than an hour a day online. Both categories are useful. Do not try to solve the 'problem' of people spending less time in your app if that is not actually a problem for that category of users.

One of the most important but also most difficult things to obtain is direct feedback from your users. Feedback can help you plan marketing campaigns, by telling you what to highlight and whom to target, and can tell you which new features to implement.

When you are testing new features, introduce them gradually and test as you go, rather than shipping infrequent large changes. This helps you avoid building on a mistake.
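As a sketch of the drop-off analysis described above, here is a minimal funnel calculation over a hypothetical event log; the screen names and session data are invented for illustration:

```python
# Minimal funnel analysis for a hypothetical shopping app:
# for each screen, what fraction of sessions that reached it continued
# to the next one? A large drop flags a screen worth investigating.
from collections import Counter

# Invented event log: each session lists the screens it reached.
sessions = [
    ["home", "search", "product", "cart", "checkout"],
    ["home", "search", "product"],
    ["home", "search"],
    ["home", "search", "product", "cart"],
    ["home"],
]

funnel = ["home", "search", "product", "cart", "checkout"]

# Count how many sessions reached each funnel step.
reached = Counter(step for s in sessions for step in funnel if step in s)

for current, nxt in zip(funnel, funnel[1:]):
    rate = reached[nxt] / reached[current]
    print(f"{current} -> {nxt}: {rate:.0%} continue")
```

In this toy data the cart-to-checkout step loses half the sessions, which is exactly the kind of signal that would prompt a closer look at that screen.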


Quality x Product

Photo by Towfiqu barbhuiya (towfiqu999999) on Unsplash

Here is a question many entrepreneurs need to ask themselves before starting work on a project:

Should your product exist?

Struggling to get a product used is not a great position to be in. If your product falls under the category of frivolous, it might be best to seriously consider whether putting time and money into it is a good idea.

Your initial market may not be your whole target market. Facebook, for example, started by targeting university students in the US. Consider what to change in your strategy and product if you're trying to drive growth in a larger market.

If a pivot is likely, try to make it as early as possible. This helps you avoid spending a lot of resources marketing a product only to change it significantly soon after. There are cases, of course, like Netflix, which pivoted well into being a mature company, but new companies may not be able to afford a rebranding.

On quality: many mobile app developers now know that notification reminders can help drive engagement. However, there is a risk of notification fatigue among users. Ideally, send notifications only when they are of clear use to the user, e.g., an item they were seeking is back in stock. An app seen to pester risks being deleted entirely.

A strong first impression is key. Plenty of mobile apps are opened only once and then forgotten, and plenty of websites are abandoned after the visitor checks only the page they arrived on. Impress your audience on the first attempt: even if not all of them become customers, a good impression can spread your product via word of mouth.


Organics

Photo by Markus Spiske (markusspiske) on Unsplash

Ahhh, the dream of creating a product with viral growth. There's nothing quite like watching a product launch and do the marketing job for you. I can only imagine it's the feeling tired new parents would get if their toddler suddenly started taking care of themselves.

And as it happens, unlike toddlers, products do sometimes launch to marketing success of magical proportions, whether intentionally or not. However, viral growth can work both ways, and once something goes viral, it's hard to keep a handle on.

Take the launch of the video game Fallout 76, a much-anticipated entry in a franchise with millions of fans. The game launched with a litany of problems, from mistakes in design to bad choices in management, and the fans were not pleased. The ensuing protest spread over the internet like wildfire. The result was that even people like me, who had paid no attention to the game's launch, heard about its troubles. Ladies and gentlemen, that is not how we would like to launch a product.

The moral of that story is that in creating and launching a product, care should be taken to create something worth spreading organically.

It is worth remembering, in this day and age, that a real world exists. Yes, large parts of that real world might involve people's heads glued to their phone screens, but word of mouth is still a powerful organic growth tool (organic growth here meaning growth obtained without paid marketing efforts). In-person conversations are likely a more convincing medium than online messages. So whether it's through friends or sales representatives, remember the offline channel.

When you do seek to spread online via word of mouth, consider the point of view from which the recommendation is framed. Friend A sending a message to Friend B about a product is arguably better than the company taking Friend B's email address and sending a generalized advertisement. Companies could give customers message-design tools, via a web form for example, to help them send bright, well-designed, but still personalized messages.

Obtaining new customers through your existing ones is an effective method. Referral programs, where customers get some benefit for bringing in others, are worth considering.


Careful Now

Photo by Burst (burst) on Unsplash

Realistically, no one is a perfect business expert. So assess all the advice you receive. Yes, even this one.

Cover Photo by Kristopher Roller on Unsplash



source https://hackernoon.com/how-to-find-x-when-your-product-x-success?source=rss

Sunday, December 11, 2022

Twitter Blue to relaunch with actual verification process, higher price for Apple users


Twitter is officially bringing back the Twitter Blue subscription Monday, starting in five countries before rapidly expanding to others, according to Esther Crawford, director of product management at Twitter. Web sign-ups will cost $8 per month and iOS sign ups will cost $11 per month for “access to subscriber-only features, including the blue checkmark,” per a tweet from the company account.

Android users can purchase on the web and use their subscription on their phones, said Crawford. The higher cost for iOS sign ups might be a move by Twitter to offset the cost of Apple’s 30% commission for in-app purchased subscriptions, or simply to deter users from subscribing through the Apple Store at all, following a Twitter storm from an angry Elon Musk over allegations that Apple was cutting advertising on the platform.

Twitter had previously attempted to democratize the prestige of the blue checkmark — once used for verifying trustworthy and noteworthy accounts — by making it available to anyone willing to shell out $8 per month, verification be damned. The result was a slew of users buying a checkmark to impersonate other accounts and generally cause mischief. (See: Fake-pharma company Eli Lilly tweeting that insulin is now free and fake-Tesla tweeting, “Our cars do not respect school zone speed limits. Fuck them kids.”)

Crawford tweeted over the weekend that Twitter has now added a review step before applying a blue checkmark to an account in order to combat impersonation, which she says is against the Twitter Rules.

With the relaunch of Twitter’s subscription offering, the social media platform will further color-code timelines by introducing gold checkmarks for businesses and, soon, gray checkmarks for government and “multilateral accounts,” whatever those are.

“Businesses who previously had relationships with Twitter will receive gold checks on Monday,” tweeted Crawford. “We will soon open this up to more businesses via a new process.”

Because Twitter is still really testing this feature out, the company warned that subscribers who change their handle, display name or profile photo will temporarily lose the blue checkmark until their account is reviewed again.

Subscribers will be able to edit their tweets, upload 1080p videos and have access to reader mode, alongside their blue checkmarks, the company said. They’ll also have their tweets “rocketed” to the top of replies, mentions and search and will be spammed with 50% fewer ads.

Twitter Blue to relaunch with actual verification process, higher price for Apple users by Rebecca Bellan originally published on TechCrunch



source https://techcrunch.com/?p=2456379

Thursday, December 08, 2022

Slack’s new CEO, Lidiane Jones, brings two decades of product experience to the job


We’ve heard an awful lot over the past couple of weeks about the executives who are leaving Salesforce, but not a heck of a lot about the woman who is taking over for Stewart Butterfield as CEO at Slack when he takes off to spend some time gardening. It’s time we changed that.

Her name is Lidiane Jones, a woman with a deep background in enterprise software. (I requested an interview with Jones for this piece, but the company was not making her available to speak with the press.) Surprisingly, many of the analysts I confer with about Salesforce knew little about her, but that could be because she just hasn’t been made available on analysts’ days.

That will likely change when she takes over at the end of next month.

But she didn’t come out of nowhere. Jones, who lives in the Boston area, has been at Salesforce for three years and quickly rose up the ranks: She started as head of product for Commerce Cloud, then was bumped up to GM of Commerce Cloud before — prior to her promotion this week — holding the title of GM of Commerce Cloud, Marketing Cloud and Experience Cloud, which basically encompasses the company’s entire B2C business.

Before that, she spent 13 years at Microsoft working on a variety of products, from Microsoft Excel and Microsoft Project to Enterprise Application Virtualization, Office Collaboration and finally Azure Machine Learning.

She also spent almost four years at Sonos as VP of product management. Her unique mix of enterprise and consumer experience should prepare her well for her new job running Slack, where she will have to walk a fine line between user experience and enterprise requirements.

In Butterfield’s farewell Slack announcement, made available to TechCrunch by sources earlier this week (was it only this week?), he effusively praised his replacement. While he could be trying to sell her to a skeptical group used to his decade of steady leadership, it sounds like he also genuinely likes her:

So, about this Lidiane. You’re going to love her. She’s pragmatic and practical, insightful, passionate, creative, kind, and curious. She’s right at that little diamond-shaped heart in the four-circle Venn diagram of Smart, Humble, Hardworking, and Collaborative. Before Salesforce she spent four years leading product at Sonos where she fell in love with Slack. She has a deep respect for our approach to product, our customer obsession, and our unique culture. She’s one of us.

That’s a pretty strong welcome, and Anand Thaker, a marketing technology advisor and the founder of several startups who follows Salesforce closely, also believes that she’s a good fit for Slack.

“She has a solid technical and management background, and the projects and groups she has been working on within Salesforce — experience, marketing, commerce — were all places Slack would fit and drive the best value. Each of these has strong consumer commerce elements where the larger growth (or less churn) will likely come and is in line (reading the tea leaves) with where Benioff has wanted Salesforce to go,” Thaker told TechCrunch.

Butterfield added that Jones’s roles inside Salesforce will make her a strong voice for Slack inside the larger organization, which could come in handy as the leadership handover occurs.

Alan Pelz-Sharpe, founder and principal analyst at the firm Deep Analysis, said that in many ways, she is better prepared for this job than some longtime CEOs.

“I don’t know Lidiane personally, but she seems the logical option as she seemed to do a good job running the Marketing, Experience, Commerce Clouds, and running those is not much different from running multiple large businesses, so ironically she has more true CEO experience as a first-time CEO than many experienced CEOs. Plus she was with Microsoft a long time — and might bring some of their rigor to the table,” he said.

Jones certainly has big shoes to fill, taking over for a founder-CEO in the midst of a big transition for the company, but with a couple of decades of tech experience behind her, she seems more than prepared for the challenge.

Slack’s new CEO, Lidiane Jones, brings two decades of product experience to the job by Ron Miller originally published on TechCrunch



source https://techcrunch.com/?p=2455250

Wednesday, December 07, 2022

Facebook and Anti-Abortion Clinics Have Your Info


This article is a collaboration with Reveal from The Center for Investigative Reporting.

Facebook is collecting ultrasensitive personal data about abortion seekers and enabling anti-abortion organizations to use that data as a tool to target and influence people online, in violation of its own policies and promises.

In the wake of a leaked Supreme Court opinion signaling the likely end of nationwide abortion protections, privacy experts are sounding alarms about all the ways people’s data trails could be used against them if some states criminalize abortion.

A joint investigation by Reveal from The Center for Investigative Reporting and The Markup found that the world’s largest social media platform is already collecting data about people who visit the websites of hundreds of crisis pregnancy centers, which are quasi-health clinics, mostly run by religiously aligned organizations whose mission is to persuade people to choose an option other than abortion.

Meta, Facebook’s parent company, prohibits websites and apps that use Facebook’s advertising technology from sending Facebook “sexual and reproductive health” data.

After investigations by The Wall Street Journal in 2019 and New York state regulators in 2021, the social media giant created a machine-learning system to help detect sensitive health data and blocked data that contained any of 70,000 health-related terms.

But Reveal and The Markup have found Facebook’s code on the websites of hundreds of anti-abortion clinics.

Using Blacklight, a Markup tool that detects cookies, keyloggers, and other types of user-tracking technology on websites, Reveal analyzed the sites of nearly 2,500 crisis pregnancy centers—with data provided by the University of Georgia—and found that at least 294 shared visitor information with Facebook.

In many cases, the information was extremely sensitive—for example, whether a person was considering abortion or looking to get a pregnancy test or emergency contraceptives.

In a statement to Reveal and The Markup, Facebook spokesperson Dale Hogan said, “It is against our policies for websites and apps to send sensitive information about people through our Business Tools,” which includes its advertising technology.

“Our system is designed to filter out potentially sensitive data it detects, and we work to educate advertisers on how to properly set up our Business Tools.”

Facebook declined to answer detailed questions about its filtering systems and policies on data from crisis pregnancy centers.

It’s unknown whether the filters caught any of the data, but our investigation showed a significant amount made its way to Facebook.

Credit: https://ift.tt/kHSonON

Caption: Using Meta’s Privacy Center, we found that Facebook captured Reveal reporter Grace Oldham requesting an appointment at the Pregnancy Resource Center of Owasso in Oklahoma.

More than a third of the websites sent data to Facebook when someone made an appointment for an “abortion consultation” or “pre-termination screening.” And at least 39 sites sent Facebook details such as the person’s name, email address, or phone number.

Facebook takes in data from crisis pregnancy centers through a tracking tool called the Meta Pixel that works whether or not a person is logged in to their Facebook account.

The pixel is largely an advertising tool that allows businesses to do things like buy Facebook ads targeted to people who have visited their website or to people who share similar interests or demographics with their site’s other visitors.

This is a mostly automated process in which the business does not have access to information about the specific users being targeted. It’s not clear how this data is later used.

Crisis pregnancy centers and other businesses can choose whether to install the pixel on their websites, though many website builders and third-party services automatically embed trackers.

In 2020, The Markup found that 30 percent of the 80,000 most popular sites use the ad tracker, and Facebook has said millions of pixels are on websites across the internet.

Facebook says pixel data can be stored for years.

That personal data can be used in a number of ways. The centers can deliver targeted advertising, on Facebook or elsewhere, aimed at deterring an individual from getting an abortion.

It can be used to build anti-abortion ad campaigns—and spread misinformation about reproductive health—targeted at people with similar demographics and interests.

And, in the worst-case scenario now contemplated by privacy experts, that digital trail might even be used as evidence against abortion seekers in states where the procedure is outlawed.

“I think this is going to be a wake-up call for millions of Americans about how much danger this tracking puts them in when laws change and people can weaponize these systems in ways that once seemed impossible,” said Albert Fox Cahn, founder and executive director of the New York–based Surveillance Technology Oversight Project.

Facebook and crisis pregnancy centers “are operating with virtually no rules,” he said.

Facebook has policies and filters that are supposed to block sensitive personal data. But the platform’s filters have often proved to be porous against the vast amount of information they take in every day.

Essentially, that means the company is putting the onus on its advertising clients to monitor themselves.

And Facebook does not have an incentive to crack down on violations of its advertising policies, said Serge Egelman, research director of the Usable Security & Privacy Group at UC Berkeley’s International Computer Science Institute.

“That costs them money to do. As long as they’re not legally obligated to do so, why would they expend any resources to fix this?”

Using Data to Make Abortion “Unthinkable”

Crisis pregnancy centers market themselves as being in the “pregnancy resource” business, offering a range of free or low-cost services from pregnancy tests to baby clothing and “options consultations.”

But their mission, articulated by Heartbeat International, the largest crisis pregnancy center network in the world, is far more sweeping: “to make abortion unwanted today and unthinkable for future generations.”

Although many centers resemble medical clinics, the majority are not licensed medical facilities.

Thus, most are not required to follow most privacy protections against the sharing of personal health information, including the federal Health Insurance Portability and Accountability Act, or HIPAA.

In recent years, crisis pregnancy centers have become increasingly savvy about targeting people using sophisticated digital tools and infrastructure.

Heartbeat International, for example, has developed suites of products to help individual centers improve their online presence, digital advertising, and data management.

These online tools enable the centers to amass highly personal information, including medical histories, details about prior pregnancies, and even ultrasound photos, and store and share that information with networks of anti-abortion partners.

As Heartbeat International says on its webpage marketing its data management system: “Big data is revolutionizing all sorts of industries. Why shouldn’t it do the same for a critical ministry like ours?”

When asked about Heartbeat International’s data-sharing practices, spokesperson Andrea Trudden said, “Heartbeat International encourages all pregnancy help organizations to utilize a variety of marketing to reach those seeking pregnancy help.”

But, she said, “we do not require affiliates to provide such details to us.”

Crisis pregnancy centers also have been documented as spreading false or misleading information about abortion, contraceptives, and other reproductive health topics, including on Facebook.

In 2021, the Center for Countering Digital Hate found that Facebook showed ads promoting an unproven medical procedure known as abortion pill reversal as many as 18.4 million times.

Many of those advertisements were linked to Heartbeat International’s Abortion Pill Rescue Network project, which did not respond to a request for comment.

How We Tracked the Data

To test how Facebook and crisis pregnancy centers have been using the data the pixel collects, Reveal reporter Grace Oldham created a new Facebook profile in late April solely for this investigation.

Then, while logged in to Facebook, she visited the 294 crisis pregnancy center websites that Blacklight found to have a pixel, clicking through each website and, when available, filling out appointment request forms.

Oldham conducted the research in a clean browser with a cleared cache.

In early May, she and Reveal data reporter Dhruv Mehrotra used Meta’s Privacy Center to download and review the data of the clean Facebook account.

They found that Facebook retained data about Oldham’s interactions with 88 percent of those crisis pregnancy center websites, linking her behavior to her Facebook profile.

For instance, Facebook knew Oldham had scheduled an appointment with the Pregnancy Resource Center of Owasso, Okla.

That state’s Republican governor, Kevin Stitt, signed a law in late May that bans virtually all abortions from the point of fertilization and took effect immediately.

The Owasso center did not respond to a request for comment, but after we reached out, Facebook’s tracking pixel was removed from every page on the center’s website.

Our analysis found that in states that will ban most or all abortions if Roe v. Wade is overturned, at least 120 crisis pregnancy centers sent data to Facebook about their website visitors.

In Tennessee, for example, where the Human Life Protection Act is poised to outlaw abortion statewide, Facebook retained data from Oldham’s interactions with 11 centers. Next Steps Resources in Dunlap sent data to Facebook about every single page Oldham visited on its site.

Facebook stored that data and knew that Oldham had submitted an appointment request with the center.

Next Steps’ executive director, Debbie Chandler, told Reveal and The Markup that the people she hired to manage her website and marketing disagreed that “any private information was being sent to Facebook.”

Credit: Nextstepinfo.org

Caption: An appointment request form on the website of Next Steps Resources, a crisis pregnancy center in Dunlap, Tenn.

We also found that anti-abortion marketing companies gained access to some of Oldham’s pixel data, even though she never interacted with their websites.

These included Choose Life Marketing, whose website claims to help crisis pregnancy centers develop digital strategies to “reach more abortion-minded women,” and Stories Marketing, a social media marketing company for “pregnancy centers and life-affirming organizations.”

Those organizations also added Oldham’s Facebook profile to custom audience groups capable of targeting her and people like her with ads for their services as well as anti-abortion messaging. Choose Life Marketing and Stories Marketing did not respond to requests for comment.

In their online materials, the marketing companies explain why Facebook plays such an important role in their digital strategies.

“Facebook ads have the highest return on investment (ROI) of any type of online marketing—even twice the ROI of Google Ads,” Stories Marketing says, adding, “Facebook ads can also be placed on Instagram and other apps for free, extending your reach at no extra cost.”

According to Choose Life: “Retargeting is an effective method of keeping your center at the forefront of their minds.… This digital marketing method can also help build credibility and trust as women go through the decision-making process because your center’s name becomes familiar to them.”

We first ran our analysis in February and repeated it using the same methodology in early May. The results of both analyses were similar. As of Tuesday, Facebook still had data about Oldham’s interactions on crisis pregnancy center websites.

Abortion Data Collection “Ripe for Abuse”

Cahn, of the Surveillance Technology Oversight Project, expressed concerns about how law enforcement agencies could use Facebook data to find people seeking abortions should the procedure become illegal in some states.

“It’s ripe for abuse,” he said of Facebook’s data collection. “It seems indefensible to me that we are allowing companies to have so much power to expose our most intimate moments to these platforms and have them use it against us.”

In recent years, law enforcement agencies have barraged tech companies like Google and Uber with demands for user data.

Often, these legal requests don’t target individual suspects but instead compel the company to divulge data about people in a particular place or searches using specific keywords.

According to the most recent data available from Facebook’s Transparency Center, the company received nearly 60,000 government requests for data from July to December 2021 and complied 88 percent of the time.

Although crisis pregnancy centers could provide law enforcement with data about anyone who had voluntarily provided personal information, they probably don’t have the technology to disclose specific information about individuals who had merely visited their websites.

But Facebook is different. Because the social media company can link activity on a crisis pregnancy center site to an individual’s profile, Facebook is in a much better position to divulge granular data about the center’s website visitors than the center itself.

Data from search engine histories played a key role in a 2018 criminal case, in which a Mississippi woman was indicted for second-degree murder after suffering a pregnancy loss at home.

The evidence included internet searches the woman had allegedly conducted for how to “buy Misoprostol abortion pill online.” The charges eventually were dropped.

“There’s nothing to stop police from using Facebook ad-targeting data the same way they’ve been using Google’s data, as a mass digital dragnet,” Cahn said.

Laura Lazaro Cabrera, a legal officer at London-based Privacy International, said that even metadata, like the titles of webpages or URLs, can be revealing.

“Think about what you can learn from a URL that says something about scheduling an abortion,” she said.

“Facebook is in the business of developing algorithms. They know what sorts of information can act as a proxy for personal data.”
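Cabrera’s point about URLs can be made concrete. The sketch below is purely illustrative, not Facebook’s actual code: the pixel ID, domain, and page path are hypothetical, though the Meta Pixel does transmit the address of the page being viewed with each request (commonly as a “dl,” or document location, parameter). The point is that the URL alone, with no page content, is enough to reveal the topic of the visit.

```python
from urllib.parse import urlparse, parse_qs

# A tracking-pixel request carries the full address of the page being
# viewed. Everything below is hypothetical, modeled on the kind of page
# names quoted in this story.
pixel_request = (
    "https://www.facebook.com/tr?id=123456789&ev=PageView"
    "&dl=https%3A%2F%2Fexample-center.org%2Fschedule-an-abortion"
)

# Anyone holding this request -- the platform, or whoever it discloses
# the data to -- can recover the visited page from the request alone.
visited = parse_qs(urlparse(pixel_request).query)["dl"][0]
print(visited)  # https://example-center.org/schedule-an-abortion

# No page content is needed; the URL path itself names the topic.
topic_is_obvious = "abortion" in urlparse(visited).path
print(topic_is_obvious)  # True
```

This is why “just metadata” is a misnomer here: decoding one query parameter of one routine analytics request reconstructs exactly what the visitor was doing.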

Getting Facebook to Fix the Problem

Privacy experts have been warning for years that Facebook’s laissez-faire attitude toward how clients use its advertising technology is vulnerable to exploitation.

After The Wall Street Journal and New York state regulators exposed how the social media behemoth collected sensitive user data from popular health apps that chart everything from heart rates to menstrual cycles, Facebook said it had implemented sophisticated filtering mechanisms to detect and block sensitive health data before taking it in.

According to the Journal, the filters were supposed to block “70,000 terms related to topics such as sexual health and medical conditions.”

But our investigation found that Facebook has continued to ingest data from webpages with obvious sexual health information — including ones with URLs that include phrases such as “post-abortion,” “i-think-im-pregnant,” and “abortion-pill.”

Despite Facebook’s official policy prohibiting websites from sending it sensitive health information, it’s unclear what, if anything, the platform does to educate its advertising clients about the policy and proactively enforce it.

One way for Facebook to prevent anti-abortion organizations from misusing its ad technology would be to strengthen the filters it already has in place or to discontinue the pixel tool entirely. But the reality, said Egelman of UC Berkeley, is that the company’s $115 billion a year in advertising revenue creates a huge financial disincentive to block user information.

“This is their business. The more data they get, the more targeted advertising they can do, and that’s the gravy train for them: targeted ads,” he said. “If they’re proactive about cutting off sites like that, it impacts their revenue in multiple ways.”

In the absence of Facebook action, Egelman thinks the best fix is public pressure and tough legislation. That’s what happened last year, when critical backlash prompted Meta-owned Instagram to shelve its plans for a kids’ version of its app.

While no comprehensive federal data privacy legislation currently exists in the United States, a draft of a bill called the American Data Privacy and Protection Act was released in early June that, if passed, could increase the Federal Trade Commission’s power to regulate and enforce how companies can use sensitive health data.

Until then, however, it remains up to state legislatures to enact consumer privacy protections.

Brandie Nonnecke, founding director of the CITRIS Policy Lab at UC Berkeley, said the European Union is creating stronger protections that would apply through the Digital Services Act.

The new guidelines, which are awaiting formal approval by the European Parliament and E.U. Council, will require large online platforms like Facebook, as well as search engines, to proactively identify ways their systems could be abused and create strategies to prevent that misuse.

“We’re not in a place where there is robust enough transparency and accountability on these data ecosystems and how they’re being used,” she said, “and especially the vulnerabilities to individuals.”

By Grace Oldham and Dhruv Mehrotra

Byard Duncan and Surya Mattu contributed to this story. It was edited by Nina Martin, Soo Oh, Rina Palta, and Andrew Donohue and copy edited by Nikki Frick.

Grace Oldham can be reached at goldham@revealnews.org, and Dhruv Mehrotra can be reached at dmehrotra@revealnews.org. Follow them on Twitter: @gracecoldham and @dmehro.


Photo by Glen Carrie on Unsplash



source https://hackernoon.com/facebook-and-anti-abortion-clinics-have-your-info?source=rss

Saturday, December 03, 2022

Elon Musk vicariously publishes internal emails from Twitter’s Hunter Biden laptop drama


Elon Musk reminded his followers on Friday that owning Twitter now means he controls every aspect of the company — including what its employees said behind closed doors before he took over.

Earlier this week, Musk teased the release of what he called “The Twitter Files,” declaring that the public “deserves to know what really happened” behind the scenes during Twitter’s decision to stifle a story about Hunter Biden back in 2020.

On Friday evening, Musk delivered, sort of. Twitter’s new owner shared a thread from author and Substack writer Matt Taibbi, who is apparently now in possession of the trove of internal documents, which he opted to painstakingly share one tweet at a time, in narrative form.

Taibbi noted on his Substack that he had to “agree to certain conditions” in order to land the story, though he declined to elaborate about what the conditions were. (We’d suspect that sharing the documents in tweet form to boost the platform’s engagement must have been on the list.)

Elon Musk Twitter files

Taibbi’s decision to reveal a selection of the documents one tweet at a time was apparently not painstaking enough. One screenshot, now deleted, published Jack Dorsey’s personal email address. Another shared an unredacted personal email belonging to Rep. Ro Khanna (D-CA), who expressed concerns about Twitter’s action at the time. Both incidents appear to run afoul of Twitter’s anti-doxing policy.

The documents, which are mostly internal Twitter emails, depict the chaotic situation that led Twitter to censor a New York Post story about Hunter Biden two years ago. In October 2020, The New York Post published a story that cited materials purportedly obtained from a laptop that the younger Biden left at a repair shop. With a presidential election around the corner and 2016’s hacked DNC emails and other Russian election meddling fresh in mind, Twitter decided to limit the story’s reach.

In conversation with members of Twitter’s comms and policy teams, Twitter’s former Head of Trust and Safety Yoel Roth cited the company’s rules about hacked materials and noted the “severe risks and lessons of 2016” that influenced the decision making.

One member of Twitter’s legal team wrote that it was “reasonable” for Twitter to assume that the documents came from a hack, adding that “caution is warranted.” “We simply need more information,” he wrote.

In his Twitter thread, Taibbi characterized it as unusual that such a consequential enforcement decision was made without consulting the company’s CEO. In reality, then-CEO Jack Dorsey was well known for being hands-off at the company, at times working remotely from a private island in the South Pacific and delegating even high-profile decisions to his policy team.

After Twitter acted, the response from outside the company was swift — and included one Democrat, apparently. “… In the heat of a Presidential campaign, restricting dissemination of newspaper articles (even if NY Post is far right) seems like it will invite more backlash than it will do good,” Khanna wrote to a member of Twitter’s policy team.

At the time, Facebook took similar measures. But Twitter was alone in its unprecedented decision to block links to the story, ultimately inciting a firestorm of criticism that the website was putting a thumb on the scale for Democrats. The company, its former CEO and some policy executives have since described the incident as a mistake made out of an over-abundance of caution — a story that checks out in light of the newly published emails.

Musk hyped the release of the emails as a smoking gun, but they mostly tell us what we already knew: that Twitter, fearful of a repeat of 2016, took an unusual moderation step when it probably should have provided context and let the story circulate. Musk has apparently stewed over the issue since at least April, when he called the decision to suspend the Post’s account “incredibly inappropriate.”

Files from the laptop would later be verified by other news outlets, but in the story’s early days no one was able to corroborate that the documents were real and not manipulated, including social platforms. “Most of the data obtained by The Post lacks cryptographic features that would help experts make a reliable determination of authenticity, especially in a case where the original computer and its hard drive are not available for forensic examination,” the Washington Post wrote in its own story verifying the emails. The decision inspired Twitter to change its rules around sharing hacked materials.

Twitter’s former Head of Trust and Safety Yoel Roth shared more insight about the decision in an interview earlier this week, noting that the story set off “alarm bells” signaling that it might be a hack and leak campaign by Russian group APT28, also known as Fancy Bear. “Ultimately for me, it didn’t reach a place where I was comfortable removing this content from Twitter,” Roth said.

Dorsey admitted fault at the time in a roundabout way. “Straight blocking of URLs was wrong, and we updated our policy and enforcement to fix,” Dorsey tweeted. “Our goal is to attempt to add context,” he said, adding that now the company could do that by labeling hacked materials.

Musk has been preoccupied with a handful of specific content moderation decisions since before deciding to buy the company. His frustration that Twitter suspended the conservative satire site The Babylon Bee over a transphobic tweet appears to be the reason he decided to buy Twitter in the first place.

Now two years after it happened, the Hunter Biden social media controversy is still a sore spot for conservatives, right wing media and Twitter’s new ownership. The platform’s past policy controversies are mostly irrelevant now with Musk at the wheel, but he apparently still has an axe to grind with the Twitter of yore — and we’re seeing that unfold in real(ish) time.

Elon Musk vicariously publishes internal emails from Twitter’s Hunter Biden laptop drama by Taylor Hatmaker originally published on TechCrunch



source https://techcrunch.com/?p=2452220