So I have this 12s6p pack I built. Haven't used it for 2-3 weeks. Before, all p groups would sag about the same at WOT. Now p group 4 stands out and drops 0.02-0.03V more than the other p groups at WOT.
Also, after the 2-3 weeks, p group 4 sat about 0.015V lower than the other p groups. All p groups used to be within about 0.008V of each other before.
I was wondering if there could be a bad cell in p group 4, so that at WOT the load is distributed among 5 individual cells instead of 6. That would explain the extra ~0.03V of sag on P4: the current the bad cell isn't providing has to be carried by the other 5 cells, so each remaining cell carries 6/5 of its normal share and the group sags roughly 20% more.
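A rough sanity check of that theory. The pack current and per-cell internal resistance below are made-up illustrative numbers, not measurements from this pack:

```python
# Rough sanity check: extra sag in a p group when one cell stops contributing.
# I_pack and R_cell are assumed values for illustration only.
I_PACK = 30.0    # assumed WOT pack current, amps
R_CELL = 0.020   # assumed per-cell internal resistance, ohms (20 mOhm)

def group_sag(n_cells, i_pack=I_PACK, r_cell=R_CELL):
    # Parallel cells share current equally, so the group's IR sag
    # equals the sag of one cell carrying its share of the current.
    return (i_pack / n_cells) * r_cell

sag_healthy = group_sag(6)  # all 6 cells contributing
sag_one_out = group_sag(5)  # one cell open / not contributing
print(f"6-cell sag: {sag_healthy * 1000:.0f} mV")
print(f"5-cell sag: {sag_one_out * 1000:.0f} mV")
print(f"extra sag:  {(sag_one_out - sag_healthy) * 1000:.0f} mV")
```

With these assumed numbers the extra sag comes out around 20 mV, the same order of magnitude as the 20-30 mV you're seeing, so the theory is at least plausible.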
Upon charging the whole pack up to 4.1V per cell, the voltage difference between all p groups returned to 0.005V. But after coming back from a 5-mile ride, or a 2nd ride, p group 4 was back to 0.01V of drift. During this ride I noticed via the BT BMS that P4 had a 0.03V higher voltage drop at WOT.
I tried to bottom balance to 38V and top balance to 50V, but P4 would not go past 4.165V while the rest of the p groups went past that with no problem, on up to 4.18V. At that point I just stopped, since P4 wasn't budging past 4.165V.
Thinking it's a partially bad cell in P4. Not completely dead, but it seems like it's going to be a problem down the line.
I'd rather fix it now so the cells in P4 don't get worked harder than the other p groups. I've got 4 spare cells, so I wonder if I should tear the pack apart and find the "bad" cell.
But I'm also worried I'm chasing a wild turkey here. And maybe it's just normal?
Also, side note that maybe @Battery_Mooch knows about: do smart BMSs just have one ADC and multiplex it across all the cell groups? I'm struck that I wouldn't normally trust a cheapish device claiming 4 significant figures on a voltage reading, especially across many, many inputs, in an uncontrolled environment, with a couple of switching regulators around it.
Even a fairly expensive 10-bit ADC can only resolve 1024 values. Even if they're smart and only span the 2-5V window where cells should always sit, that's about 0.003V of resolution assuming zero least-significant bits of error. I can't see them splashing out on a 12-bit part for this, so that last decimal place feels… ambitious.
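For reference, the per-step resolution works out like this. The 2-5V span is the hypothetical "smart" input range from the post, not anything a specific BMS is known to do:

```python
# Per-step (LSB) resolution of an ADC mapped across a 2-5 V cell window.
# The 3 V span is an assumption about how a "smart" BMS might range its input.
def lsb_volts(bits, v_span=3.0):
    # An n-bit ADC divides the span into 2**n steps.
    return v_span / (2 ** bits)

for bits in (10, 12, 14):
    print(f"{bits}-bit: {lsb_volts(bits) * 1000:.2f} mV/step")
```

So even an ideal 12-bit converter over that span only gets you ~0.7 mV steps, before any noise, reference drift, or multiplexer error eats into it.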
The next time some pensioner who thinks he’s funny pulls me over to ask about “that fanshy lukin conthraption” that’s all I’m referring to it as. I’ll leave it up to you to work out what version of an offensive Irish accent to decode that with
Yes, typically most BMS controller chips and MCUs do that. It saves a lot of money, and truly simultaneous reading of multiple channels is rarely needed.
Agreed. It is silly IMO to assume they are getting much beyond eight or nine effective ADC bits of resolution.
Yes, the inputs are low-pass R-C filtered, and given the low frequency and duty cycle of these measurements they can integrate multiple readings, and even average or otherwise process them in software, but that can only get you so far.
They might actually use 12-bit, or perhaps even higher, ADCs, but there is a huge difference between accuracy and precision.
Yeah, cool cool, mostly wanted to get a sanity check. If I ever get my hands on one of those I'll take it apart and see; there's a chance they got lucky with an LCSC MCU that has a 12-bit ADC in it. But accuracy vs. precision is my bigger concern, given it's invariably cheap components that need to work in a wide variety of applications.
Incidentally, I think I saw in a datasheet that BMS ICs do have a decent voltage reference specifically for the cutoff point (4.18V or something like that), even if that doesn't help the MCU-based models, or the MCU-based sections of models. Will have to check whether I'm imagining that.
I’m not too sure about that, at least for the controllers from the big manufacturers, as they typically allow a wide range of cell charge voltages and chemistries (with different charge voltages). The LTC6813 has a 3.0V reference IIRC.