FREQUENTLY ASKED QUESTIONS
Got questions? We may have answers. Read below for frequently asked questions from people like you. If you’re still curious, scroll to the bottom and find our email.
What is the time commitment required?
While every project is different, we expect each of our partners to treat this partnership as one of their priority projects for 2017. This may include the following activities:
- Weekly or biweekly meetings with the Common Cents team in the period before and after the intervention. These are typically working sessions that include project planning, design reviews, and implementation work.
- A behavioral diagnosis of your customers' journey as it relates to the customer outcomes we're trying to achieve. To facilitate this diagnosis, we will provide you with a data request covering key steps in the funnel.
- Implementing an intervention or experiment that you run with your customers. For the experiment to be valid, we will run it on a small sample of your customers and measure changes relative to the group of customers who did not receive the intervention.
- A kick-off behavioral economics workshop to get everyone in the organization excited about and interested in this new approach. We need to get a critical mass of the organization excited. (optional)
Who else have you worked with? (Our social proof)
Since January 2016, Common Cents has partnered with 14 organizations to design and test interventions that help Americans improve their financial health. These current partners include: AARP, Digit, EarnUp, Gusto, Payable, Propel, RetireMap, RobinHood, Credit Union 1, Duke Federal Credit Union, Latino Community Credit Union, Self-Help Credit Union, and GreenPath Financial Wellness. Over the years, the Duke lab has worked closely with many more financial institutions and companies.
How will the intervention be shared?
We work closely with our partners to share our experiment results in an aggregated, anonymized manner. Beyond sharing information for the public good, our current partners appreciate and have benefited from the positive press. We understand that sharing information like this may be a sensitive topic. As such, all write-ups are subject to the company's review prior to publication. We can also delay publication of results or omit the organization's name. NOTE: We will never share your proprietary data, like revenue or conversion stats, publicly. Here are three examples of press that highlights research done in collaboration with partners:
- We did a survey with Payable users and published the findings in WSJ. This story focused on the financial lives of freelancers and includes a descriptive infographic our team created.
- We ran a survey in partnership with EarnUp and published the findings in Scientific American. How to reframe savings: we’re more attracted to “earning money” than saving it.
- The Duke lab ran an experiment in Kenya that increased savings rates within a very poor village. Here is the summary in Wired.
We have a monthly column in Scientific American. In addition, our target press outlets include Forbes, PBS, Business Insider, and Wired.
How long are the engagements?
There is no typical engagement. The length of an engagement depends on the complexity and duration of the intervention and on how quickly the partner is able to implement it. For fast-moving partners, our engagements typically last 3-6 months; otherwise, they are set to last 6-12 months.
How many users / members does a partner need?
While we don’t have a minimum percentage, an organization would need to serve at least 500 LMI households. Additionally, the majority of our partnerships (roughly 5 out of 8 partners) require a user base of 50,000 members, clients, or users. All else being equal, organizations with a higher percentage of LMI users, along with a mission of serving LMI households, will be given priority.
How much do you study the consumer’s behaviors versus the partnership processes?
Our focus is entirely on consumer/member/client behavior. While it’s likely that we would recommend particular tweaks to a partner’s processes, we are not evaluating current services. All testing will be on additions or changes that we make to a process in order to change consumer behavior.
What kind of time commitment typically do you ask of your partners as far as staff time?
Time commitment varies by partnership, but we generally ask partners to commit to weekly or biweekly 30-minute check-ins with 2-4 members of the organization. There will be moments during the project that require more time and moments that require less. Additionally, most partnerships will also include a site visit that could last anywhere from a couple of days to a week or more, and we do expect additional time commitments in the run-up to and during the site visit.
What about early-stage startups without a significant user base? Will they be considered?
In order to qualify, FinTech partners need to have either series A funding OR a user base of 50,000 or more.
Our Credit Counseling agency has just been doing the research thus far and we don’t have a product/program yet. Does that disqualify us?
In some cases, we love to get in on the ground floor as programs or products are being developed. This generally entails pre-testing different features of the product or program and then launching a field test around enrollment, usage, or outcomes. If you are considering this, please reach out to us to see if this is a good fit.
What types of products are you planning to test in 2017?
We are generally open to testing features within products or services that aim to increase short-term savings, increase long-term savings, decrease debt, manage cash-flow challenges, and/or decrease expenses. What we test is largely determined by the partnership. It is a collaborative partnership with input from the Common Cents team and the partner organization.
Do you want to do a test study before starting the partnership? Can you better define what is expected of the partner, or how you view a partnership?
Partnerships are collaborative. A partnership generally begins with determining the research question or key behavior that BOTH the partner and Common Cents are interested in. After the research question is identified, Common Cents conducts a behavioral diagnostic, which explores the process and why people may or may not do the key behavior. Then Common Cents and the partner ideate on ways to remove barriers, increase ease and motivation, and amplify benefits. The partner often determines what is feasible within their organization, navigates any legal or technical challenges within the organization, and provides logistical support for running the study and for collecting and sharing data. Common Cents often takes the lead on the behavioral diagnosis, idea generation, design, experimental design, and data analysis.
Would you work with an existing partnership that blends these two types of partnerships? We are a nonprofit building a tool with a FinTech partner.
We will evaluate these on a case-by-case basis. Please reach out to Kristen Berman (Kristen@commoncentslab.org) and Mariel Beasley (firstname.lastname@example.org) to discuss this further.
Is it possible to see a document of the application?
If you would like a pdf version, please email Selina (email@example.com).
Could you give us three or four other specific examples of successful and unsuccessful projects at a credit union? (Unsuccessful meaning the test indicated it was not effective)
1) Successful: We were able to increase usage of Credit Union services among low-use members by creating a “reciprocal contract” at account opening, which outlined what members could expect from the Credit Union and what the Credit Union expected from them.
2) Successful: We increased the probability that people living in transitional housing would reach 15% of their savings goal within 6 months or less by giving them punch cards to track deposits and a tangible memento (a large, heavy, gold-colored coin) for completing their punch card.
3) “Unsuccessful”: We sent postcards to people who had used the VITA site to file their taxes the previous year. One group was simply reminded to make an appointment; the other group had an appointment automatically scheduled and was instructed to call to re-schedule. There was no difference in whether they returned to the VITA site for the current year.
4) “Unsuccessful”: We attempted to increase usage of Credit Union services by sending low-touch members punch cards to track activity. This was ineffective in increasing use of services.
We strongly believe that “null results” (finding out that an idea doesn’t work) are just as helpful as finding an idea that does work. They allow us to better understand what may or may not be actual barriers, and they allow us to divert resources to other projects and ideas.
Is there any travel involved for the partner’s staff?
We host two invite-only workshops/conferences each year. One is in the Spring in San Francisco for FinTech partners and other FinTech companies. The other is in the Fall in Durham for traditional financial service providers (partners and others). We do ask that partners join the appropriate workshop; travel costs are covered by Common Cents.
Is the conference you mentioned a forum to hear about and learn from the work of all of the organizations that you select this year? Years past?
The workshops vary somewhat from year to year but generally include learnings from the current and prior years, deep dives into behavioral economics, and hands-on practice applying those insights and developing real-world, testable ideas.
Will last year’s partners be considered for a second year of study?
A (tertiary) goal of ours is to increase the capacity of organizations to continue doing their own behavioral audits and testing ideas. We understand that this generally means doing multiple projects over multiple years. If both Common Cents and the partner agree that continued work would be fruitful, we would certainly love to continue those relationships. However, we envision that a second or third study will be increasingly driven by the organization as Common Cents slowly steps back in its role. Rather than reapplying, returning partners should talk to Mariel (traditional FSP) or Kristen (FinTech) and their Common Cents project lead.
Other than reaching 1.9 million people, how will you measure your desired return?
Our goal is to have a measurable impact on the financial behavior of 1.9 million people. While this is slightly nuanced, it generally means that we can quantifiably measure increases in savings, decreases in debt, improvements in cash-flow management, and decreases in expenses that do not contribute to well-being.
Is it worth applying if we don’t have funding, even if we think our product is compelling enough given the Lab’s objectives? We are currently in the process of securing the funding necessary.
For FinTech, please reach out to Kristen Berman (Kristen@commoncentslab.org) to discuss specifics.
Will Common Cents propose which products to test or propose what to test within a certain product? How will the collaboration work with a company?
It may help to distinguish between products and features. We generally test features (jointly chosen by the partner and Common Cents) within a given product (chosen by the partner). For example, we used Digit’s platform and added the ability to pre-commit to saving part of your refund. Self-Help was interested in increasing uptake of its refinance product, so we worked with them on the language of their letter. In some cases, we will work with a partner to develop a new product, which will then be tested. That product will be informed by the expertise of the partner AND the research and experience of Common Cents.
Is there a minimum user base for the test?
Generally, the larger the user base the better for a test (larger samples allow us to detect smaller differences), but it’s not uncommon to run tests with a few hundred people per test group. More precisely, it depends on the actual test:
- Is the outcome binary (enrolled or didn’t enroll) or continuous (dollars saved)?
- What is the base rate (e.g., currently saving $50/month on average)?
- What difference between the test group and the control group would be “worth it,” or what difference do we expect to see (e.g., the intervention would need to increase savings to $60/month to be worth it)?
- How much noise is there around the average (e.g., most people actually save around $50, versus half of the people saving nothing and the other half saving $100)?
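For readers who want a concrete feel for these inputs, they are exactly what a standard power calculation uses. Below is a rough sketch of the textbook normal-approximation sample-size formula for a two-group test with a continuous outcome, using the hypothetical $50-to-$60 savings example from the answer above; the standard-deviation values are illustrative assumptions, not figures from any actual Common Cents study.

```python
# Back-of-the-envelope sample size for a two-group experiment,
# using the standard normal-approximation formula. All dollar
# figures and standard deviations here are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """People needed in EACH group to detect a difference of `delta`
    in a continuous outcome with standard deviation `sd`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = z.inv_cdf(power)           # desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detecting a $10/month lift (from $50 to $60) with sd = $50:
print(n_per_group(delta=10, sd=50))  # 393 per group

# A less noisy outcome (sd = $25) needs far fewer people:
print(n_per_group(delta=10, sd=25))  # 99 per group
```

Note how the requirement scales with the square of the noise-to-effect ratio: halving the standard deviation cuts the needed sample to roughly a quarter, which is why the "noise around the average" question matters so much.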