yes, looks like they've remapped SIP to the north server, in Michigan I think.
whoever was attacking the texas server must have started hitting the michigan one too.
texas server is also still lagging badly.
sip LAGGED (3178 ms)
Hello,
Please advise if you are not seeing this issue completely resolved at this point, thank you!
Everything should be back to normal on them. We mitigated what looked like an attack on them earlier in the week and discovered an issue with some users who were registering over TCP instead of UDP. That was causing problems for everyone, since TCP held all the packets and then released them at once, creating delays. We disabled TCP registrations because everyone should be using UDP.
Between blocking the attack and disabling TCP registrations (which were causing a quasi-attack on their own), everything appears completely stable on our end now.
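For anyone curious why the transport matters, here's a minimal Python sketch (placeholder server, username, and addresses; not VOIPo's actual configuration) of a SIP REGISTER sent over UDP. The point is that each UDP request goes out as its own datagram, whereas a stalled TCP connection can hold packets and then release them all in a burst:

import socket

# Placeholder values for illustration only.
SERVER = ("sip.example.com", 5060)
USER = "1001"

# A bare-bones (unauthenticated) REGISTER; a real device would answer
# the server's 401 challenge with credentials.
register = (
    f"REGISTER sip:{SERVER[0]} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP 0.0.0.0:5060;branch=z9hG4bK-demo\r\n"
    f"Max-Forwards: 70\r\n"
    f"From: <sip:{USER}@{SERVER[0]}>;tag=demo\r\n"
    f"To: <sip:{USER}@{SERVER[0]}>\r\n"
    f"Call-ID: demo-call-id@example\r\n"
    f"CSeq: 1 REGISTER\r\n"
    f"Contact: <sip:{USER}@0.0.0.0:5060>\r\n"
    f"Content-Length: 0\r\n\r\n"
)

# SOCK_DGRAM = UDP: one request, one packet, no stream to back up.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(register.encode(), SERVER)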
Reseller support load is now drastically lower than it has been all week with only some typical support issues that seem unrelated.
Greenlantern - We are looking at your ticket, but everything looks fine on our end. We even did a test on your accounts by registering to them and they worked fine. Brandon will get in touch with you.
Also worth noting...we've noticed with a few users that if their device sends too much info in the SIP headers, the packet can exceed the MTU and cause issues. We control this on our ATAs by limiting how much info they send, but if you have BYOD devices configured to send things like codecs we don't support, a name in the From header, etc., it can push the message over the MTU limit and cause a busy signal or call failure.
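To put a rough number on that: a typical Ethernet MTU is 1500 bytes, and after IP/UDP overhead a SIP request sent over UDP should stay under it to avoid fragmentation. A quick Python sketch of the size check involved (the constants and field layout are illustrative, not VOIPo's implementation):

# Typical Ethernet MTU and IPv4/UDP overhead; real paths may be smaller (VPNs, PPPoE).
TYPICAL_MTU = 1500
IP_UDP_OVERHEAD = 28  # 20-byte IPv4 header + 8-byte UDP header

def fits_in_one_packet(request_line, headers, sdp_body):
    """headers: list of (name, value) pairs. True if the SIP message avoids fragmentation."""
    header_block = "".join("%s: %s\r\n" % (name, value) for name, value in headers)
    message = request_line + header_block + "\r\n" + sdp_body
    return len(message.encode()) <= TYPICAL_MTU - IP_UDP_OVERHEAD

# Every extra codec line in the SDP and every long display name in From/Contact
# adds bytes; a device advertising a dozen unsupported codecs can tip an INVITE
# over the limit and get it fragmented or dropped along the way.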
Hi Tim,
Thanks for all the hard work you guys did the last few days. It was rough sledding, but things look great right now.
Any idea if/why these attacks are intentionally targeting voipo?
Great info on the MTU limits. We'll know now to look for extraneous info like extra codecs, names, etc. in case BYOD users add any of that.
Thanks again, and now everyone go perform your preferred good luck rituals.
The attacks should hopefully be mitigated now. They weren't intentional...just certain BYOD devices causing loops (think a forward pointed back at itself spawning tens of thousands of calls). We have a lot of loop detection/prevention in place, but this was a new case we'd never seen before, where it happened with devices connecting over TCP and registrations looping.
So far everything seems smooth now. This was happening on SIP but not SIP7, but at one point earlier in the week, to isolate things, we redirected a lot of traffic from SIP to SIP7 while we tested that server more thoroughly, and that's why SIP7 started acting up too (the users whose devices were looping got moved). Once we isolated and blocked it, we got things back to normal and added logic to prevent it.
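One simple way to picture the kind of loop guard being described (the thresholds and the in-memory approach are just an illustration, not VOIPo's actual logic): flag a caller/callee pair that re-appears an implausible number of times within a few seconds.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 5   # illustrative threshold
MAX_ATTEMPTS = 20    # more attempts than this inside the window looks like a loop

recent = defaultdict(deque)  # (caller, callee) -> timestamps of recent attempts

def looks_like_loop(caller, callee, now=None):
    now = time.time() if now is None else now
    attempts = recent[(caller, callee)]
    attempts.append(now)
    # discard attempts that have aged out of the window
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) > MAX_ATTEMPTS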
Hopefully smooth sailing now.
Also for what it's worth, we are in the midst of expanding the BYOD network with several new servers for you guys to choose from so there will be more choices in more DCs for you guys soon.
In 2 years I have not seen anything to let us know how we are supposed to have our devices set up. You say that we are using the incorrect codecs, but we are trying to connect to you and we can't, and the equipment is trying to figure out what codec to use.
You do not offer the devices needed to support business-class customers, which forces me to use BYOD devices. The Grandstream ATAs are absolute garbage! We have had an 80% failure rate (90% purchased through Voipo), but they are the cheapest ones around. I tried Patton analog gateways with them locked to G.711u and had the same connection issues. I switched to the Cisco SPA8000 because they worked the best and were easier to provision. And now you tell me, a year and a half later, that I have them configured incorrectly? Really, why couldn't you tell me that back then when I reached out to your tech support? Instead I was told Voipo doesn't support BYOD. I didn't ask you to tell me how to program the SPA8000, I just needed to know what the problem was on your end so I could configure our SPA8000s accordingly.
I am still guessing whether the problems are on my end or your end. I put in a ticket and give a time frame during which the accounts are experiencing issues, and I get short responses saying that unless I provide a call reference number they can't help. Really? I can look right at the call logs on each account and tell you when they are having issues: simply put, it's when they have 0-to-1-minute calls back to back to and from the same numbers.
In the past several weeks I have had many of my customers screaming, yes screaming, at me about the service, and after 2 years I am ready to throw in the towel. But I won't, because I know this will work; the service does work. We just need more from our provider, our partner, to ensure we provide great service. We need information from Voipo when the services are hindered BEFORE the customers start complaining. We need to know from you that you are receiving too much header information, or that we are using the wrong codecs, BEFORE the customer starts complaining. I really want to stop the guesswork: how am I supposed to work with you to ensure my devices are configured and connecting correctly? I can't call anyone, and as for the ticketing system, I know your techs are as busy as I am, but it's impossible to work on anything when you have to wait for a reply that may come in the next few minutes or in an hour or so. Not to mention that if the calls go through, we assume everything is configured correctly.
I believe one of my accounts experienced this. I had a SPA8000 that had been in service for over a year; I reconfigured it for a different account and it worked fine for several months. Then in November and December, highly intermittently, we started receiving bogus incoming calls on all ports at the exact same time. The phone system would answer these calls by the thousands, and yes, every call left a dead-air voicemail message. I have pulled that unit out of service but have not had the time to evaluate it on the bench to see if the unit is defective or if something else is causing the issues.
I also had another SPA8000 at a different location that I believe was hacked, as this one was facing the public side directly. I know it's very dangerous, but I was checking whether the call drops were being caused by a router, so we gave it a public IP address and put it outside the firewall. Thousands of out-of-state calls to the same area code and exchanges were made within seconds of each other. I was on location when it was happening and no one was using the phone, yet hundreds of calls were being made. I pulled that SPA8000 out of service and will evaluate the firmware. But my question is: doesn't Voipo have anything in place to detect auto dialers being used and to shut down the account? I found it by accident looking at the account's call logs. There were thousands of calls spanning a few months, and we paid for all of them. How, as a reseller, can I control or stop this? I have limits set, but they were never reached, nor can I lower the limits, as my clients would exceed them and the service would be shut off. So how do we monitor the activity from the outside looking in? I would like to see some real-time evaluations and not have to manually read through the call logs after the fact.
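For what it's worth, the kind of scan being asked for can be sketched against an exported call log. The field names ('account', 'start', 'dest') and thresholds below are assumptions about the export format, not VOIPo's actual data or API:

from collections import defaultdict
from datetime import timedelta

BURST_WINDOW = timedelta(seconds=30)  # illustrative thresholds
BURST_SIZE = 10

def flag_dialer_bursts(cdrs):
    """cdrs: iterable of dicts with 'account', 'start' (datetime), 'dest' (10-digit number).
    Returns (account, area code + exchange) pairs with a suspicious burst of calls."""
    by_key = defaultdict(list)
    for cdr in cdrs:
        npa_nxx = cdr["dest"][:6]  # area code + exchange
        by_key[(cdr["account"], npa_nxx)].append(cdr["start"])

    flagged = []
    for key, times in by_key.items():
        times.sort()
        # any window of BURST_SIZE consecutive calls packed into BURST_WINDOW gets flagged
        for i in range(len(times) - BURST_SIZE + 1):
            if times[i + BURST_SIZE - 1] - times[i] <= BURST_WINDOW:
                flagged.append(key)
                break
    return flagged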
Joe Siepka