SEPA Bulk Payment File Software: A Complete How-To Guide
2026-04-25
You’re probably dealing with one of three situations right now.
Your finance team has an Excel sheet that “mostly works”, but every bank upload feels like a gamble. Your ERP still exports an old AEB format that nobody wants to touch. Or your developers are asking for an API because manual uploads have become the slowest part of the payment process.
That’s exactly where SEPA bulk payment file software earns its keep. In practice, the hard part isn’t generating XML. It’s getting from messy source data to a clean, bank-ready file without duplicate IDs, broken IBANs, or formatting errors that trigger rejection.
SEPA credit transfers have been fully operational since 1 August 2014. By 2021 they accounted for 96% of all transfer volume within the eurozone, and credit transfers represented €105.6 trillion, or 93% of total payment value in the euro area, according to Mambu’s overview of SEPA payment schemes. For UK businesses paying staff, suppliers, and cross-border partners, file-based processing is no longer a niche workflow. It’s standard operations.
What usually separates a smooth process from a painful one is preparation, validation, and consistency. If you get those right, the rest becomes routine.
Preparing Your Source Data for Seamless Conversion
Most rejected files start long before XML is generated. They start in a spreadsheet with merged cells, inconsistent dates, free-text amounts, copied IBANs with spaces, or reused payment references.

A major challenge for UK SMEs is converting legacy formats. File rejection issues affect 28% of UK SMEs, and platforms that simplify converting Excel, CSV, and old AEB formats such as 34, 14, and 59 can reduce those rejections by up to 40% through proper validation, as noted in this review of SEPA direct debit software and conversion challenges.
Prepare Excel files like a payment file, not a report
Excel is the most common starting point. It’s also where finance teams accidentally introduce the most avoidable errors.
A usable sheet should look like raw transaction data, not a management summary. One row should equal one payment or one collection. Keep headers simple and stable. Don’t use merged cells, subtotals, hidden columns, comments, or formulas that return inconsistent output.
The columns that matter most are usually these:
- Beneficiary or debtor name. Use the legal or operational name you want to appear in the payment file.
- IBAN. Keep it in a single column. Remove spaces before upload if possible.
- Amount. Store it as a number, not text with currency symbols.
- Execution date. Use one date format across the file.
- End-to-end reference. Make it unique for each row.
- Remittance information. Keep it concise and consistent.
Practical rule: If a human needs to “interpret” a cell before mapping it, the file isn’t ready.
Where teams go wrong is mixing internal notes into payment columns. A remittance field that contains invoice text, commentary, and approval notes often creates downstream problems. Keep operational notes in a separate column or separate file.
If your team works from exported spreadsheets, it also helps to lock a house template. That way every payroll run, supplier batch, or direct debit collection starts from the same structure. If you need an example of how to structure CSV data before conversion, this guide to CSV to SEPA XML preparation is useful.
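The column rules above can be enforced in code before anything reaches a converter. Below is a minimal sketch, assuming illustrative column names ("iban", "amount", "end_to_end_ref") rather than any required schema; adapt the names to your own house template.

```python
# Minimal pre-conversion checks on rows exported from a spreadsheet.
# Column names are illustrative, not a required schema.
from decimal import Decimal, InvalidOperation

def clean_row(row):
    """Normalise one payment row; returns (cleaned_row, problems)."""
    problems = []
    cleaned = dict(row)

    # IBANs are often pasted with spaces; strip them before mapping.
    cleaned["iban"] = row.get("iban", "").replace(" ", "").upper()
    if not cleaned["iban"]:
        problems.append("missing IBAN")

    # Amounts must be numbers, not text with currency symbols.
    raw_amount = str(row.get("amount", "")).replace("£", "").replace("€", "").strip()
    try:
        cleaned["amount"] = Decimal(raw_amount)
    except InvalidOperation:
        problems.append(f"unparseable amount: {row.get('amount')!r}")

    return cleaned, problems

def check_unique_references(rows):
    """End-to-end references must be unique per row."""
    seen, duplicates = set(), []
    for row in rows:
        ref = row.get("end_to_end_ref")
        if ref in seen:
            duplicates.append(ref)
        seen.add(ref)
    return duplicates

rows = [
    {"name": "Supplier One", "iban": "DE89 3704 0044 0532 0130 00",
     "amount": "1250.00", "end_to_end_ref": "INV-10001"},
    {"name": "Supplier Two", "iban": "NL91ABNA0417164300",
     "amount": "890.00", "end_to_end_ref": "INV-10001"},  # reused reference
]
cleaned, problems = clean_row(rows[0])
print(cleaned["iban"])                 # spaces removed before mapping
print(check_unique_references(rows))   # flags the reused reference
```

Running checks like these on every export makes the "one row, one payment" rule a habit rather than a hope.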
CSV files need boring consistency
CSV is often cleaner than Excel because it strips away formatting. That’s good. It also means small inconsistencies become more visible.
Check these points before you upload:
- Delimiter choice. Make sure your export uses the expected separator consistently.
- Text encoding. Strange characters in names or references can cause avoidable validation issues.
- Decimal formatting. Don’t mix comma and full stop conventions in amount fields.
- Blank mandatory fields. Empty IBAN or amount fields should be fixed before conversion, not after rejection.
CSV works well when your ERP, accounting package, or payroll tool can export flat records reliably. It works badly when different teams edit the file in different spreadsheet applications and save over each other’s formatting rules.
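The checklist above is easy to script. Here is a hedged sketch using Python's standard csv module; the semicolon delimiter and column names are assumptions to match against your own export settings.

```python
# Quick pre-upload sanity checks on a CSV export. Delimiter and
# column names are assumptions -- match them to your own export.
import csv
import io

SAMPLE = """name;iban;amount
Supplier One;DE89370400440532013000;1250.00
Supplier Two;;890,00
"""

def check_csv(text, delimiter=";"):
    issues = []
    reader = csv.DictReader(io.StringIO(text), delimiter=delimiter)
    for line_no, row in enumerate(reader, start=2):
        if not row.get("iban"):
            issues.append(f"line {line_no}: blank IBAN")
        amount = row.get("amount", "")
        # Mixed decimal conventions (890,00 vs 890.00) break conversion.
        if "," in amount:
            issues.append(f"line {line_no}: comma decimal in amount {amount!r}")
    return issues

for issue in check_csv(SAMPLE):
    print(issue)
```

A check like this runs in seconds and catches exactly the "fix before rejection, not after" class of problems.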
JSON works best when your source system is already structured
JSON is usually the cleanest source format because field names are explicit. If your application already stores payment data in a structured way, JSON lets you avoid spreadsheet handling altogether.
The main discipline here is naming. Keep keys predictable, and make sure the same concept always maps to the same field. If one payload sends beneficiary_name and another sends payeeName, someone will eventually map the wrong value or build brittle logic around exceptions.
A good JSON payload also treats identifiers properly. Instruction identifiers, end-to-end references, and account data should be generated systematically, not entered ad hoc by users.
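The naming discipline can be centralised in one small normalisation layer. The alias table below is illustrative, not a fixed schema; the point is that every upstream spelling maps to one canonical key.

```python
# Map whatever key each upstream system sends onto one canonical
# field name, so downstream logic only ever sees "creditor_name".
ALIASES = {
    "beneficiary_name": "creditor_name",
    "payeeName": "creditor_name",
    "creditor_name": "creditor_name",
}

def normalise(payment: dict) -> dict:
    out = {}
    for key, value in payment.items():
        out[ALIASES.get(key, key)] = value
    return out

print(normalise({"payeeName": "Supplier One GmbH", "amount": "1250.00"}))
# {'creditor_name': 'Supplier One GmbH', 'amount': '1250.00'}
```

One table, maintained in one place, beats exception-handling logic scattered across every integration.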
Legacy AEB files need conversion, not retyping
This is the part most articles skip.
A lot of UK-linked businesses, advisers, and shared service teams still receive or generate AEB 34, 14, or 59 files from older systems. These aren’t “wrong”. They’re just not the final format your bank expects for modern SEPA workflows.
Retyping data from AEB into Excel is the worst possible approach. It introduces human error, destroys traceability, and wastes time every single run. The right approach is to convert the legacy structure directly into valid SEPA XML while preserving the underlying transaction data.
That matters even more if your source system still stores historic account formats, old references, or rigid fixed-width layouts. Clean conversion software should parse those structures, map them properly, and surface anything that needs review before file generation.
Old formats aren’t the problem by themselves. Manual intervention is the problem.
If your source file is clean, your first bank submission is usually straightforward. If it isn’t, the XML generator ends up carrying the blame for issues that began upstream.
A Walkthrough: Generating Your First SEPA XML File
It is 4:15 p.m. on payroll day. Finance has the spreadsheet, approvals are done, and nobody wants to spend the next hour guessing why the bank rejected an XML file. Your first run with ConversorSEPA should feel controlled: import the source data, confirm the payment type, map the fields, run validation, and generate a file you can submit with confidence.

Upload the file and choose the correct scheme
Start with the source file you already have. In ConversorSEPA, that might be Excel, CSV, JSON, or a legacy AEB file that still needs to be converted rather than retyped. Then choose the output scheme: SEPA Credit Transfer for outgoing payments or SEPA Direct Debit for collections.
That decision sets the rules for everything that follows. A supplier payment batch and a collection batch can look similar in a spreadsheet, but they do not share the same required fields, party roles, or mandate data. If the scheme is wrong, the later errors often look random even though the underlying problem started at the first screen.
Before you go any further, confirm the operational settings for the batch:
- Execution date matches the intended run date
- Debtor account is the correct funding account
- Currency and purpose fit your bank setup
- Transactions in the batch belong together from an approval and timing perspective
I tell new clients to pause here for thirty seconds. It is faster to split one mixed batch now than to explain later why two entities, three dates, and one account ended up in the same payment file.
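That thirty-second pause can also be automated. A minimal sketch, assuming the batch rows carry "debtor_iban" and "execution_date" fields (illustrative names), that flags a batch which should be split:

```python
# Check that the rows in one batch actually belong together:
# one funding account and one execution date per file.
def batch_issues(rows):
    dates = {r["execution_date"] for r in rows}
    accounts = {r["debtor_iban"] for r in rows}
    issues = []
    if len(dates) > 1:
        issues.append(f"{len(dates)} execution dates in one batch")
    if len(accounts) > 1:
        issues.append(f"{len(accounts)} funding accounts in one batch")
    return issues

rows = [
    {"debtor_iban": "DE89370400440532013000", "execution_date": "2026-04-30"},
    {"debtor_iban": "DE89370400440532013000", "execution_date": "2026-05-01"},
]
print(batch_issues(rows))  # flags the mixed execution dates
```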
Map columns with intent, not assumptions
Field mapping is where users either trust the process or start second-guessing it.
ConversorSEPA can suggest matches for common fields such as amount, IBAN, beneficiary name, and reference. Treat those suggestions as a starting point. Internal column headers are often misleading, especially in exports from ERPs or older accounting tools.
These fields deserve a manual check every time:
- Beneficiary name. Use the legal payee name, not an internal nickname or contact label.
- IBAN. Check that the column contains the full account number, not notes, fragments, or a domestic legacy field.
- End-to-end reference. Each payment needs a unique, deliberate value.
- Date. Make sure you are mapping the payment date, not the invoice date or posting date.
- Remittance information. Keep the text within ISO 20022 limits and avoid unsupported characters.
For mixed teams, this is one of the practical advantages of ConversorSEPA. Finance can map and review files in the UI, while developers can later automate the same logic through the ConversorSEPA API for automated SEPA file generation. That shared path matters because manual and automated workflows should produce the same output, not two slightly different versions of the truth.
If your organisation is also building surrounding systems, the same design discipline used in payment integrations applies to developing flexible e-commerce APIs. Consistent field naming and predictable payload structure reduce mapping mistakes on both sides.
Validate before you generate
Validation catches the problems that users rarely spot in a spreadsheet view. Invalid IBAN structures, duplicate references, missing mandatory fields, overlong remittance text, and bad dates are common first-run issues.
A useful validator does more than say “error”. It points to the row, the field, and the reason. That is the difference between fixing a file in five minutes and hunting through 800 rows with no clear starting point.
These are the checks I review first:
| Check | Why it matters | What to do if flagged |
|---|---|---|
| Invalid IBAN format | Banks reject structurally incorrect account data | Correct the source value, then revalidate |
| Duplicate end-to-end ID | Creates ambiguity and rejection risk | Generate a unique reference for each row |
| Overlong remittance text | Can break ISO field limits | Trim or standardise the text |
| Missing mandatory fields | Prevents complete XML generation | Fill the field in the source file or mapping layer |
A clean validation screen is your pre-submission control point. It does not guarantee the bank will accept the file, because bank-specific rules still vary, but it removes the preventable errors that cause the bulk of first-time failures.
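For teams that want a feel for what IBAN validation actually checks, here is a minimal sketch of the ISO 13616 mod-97 checksum. Real validators also verify country-specific length and BBAN structure, so treat this as an illustration of the structural check, not a complete implementation.

```python
# Minimal IBAN checksum sketch (ISO 13616 mod-97). Full validators also
# check country-specific length and BBAN format rules.
def iban_checksum_ok(iban: str) -> bool:
    iban = iban.replace(" ", "").upper()
    if len(iban) < 15 or not iban.isalnum():
        return False
    if not iban[:2].isalpha() or not iban[2:4].isdigit():
        return False
    # Move the first four characters to the end, then map letters
    # to numbers (A=10 .. Z=35) and take the result mod 97.
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1

print(iban_checksum_ok("DE89 3704 0044 0532 0130 00"))  # True
print(iban_checksum_ok("DE89370400440532013001"))       # False (corrupted digit)
```

A check like this catches transposed or mistyped digits in the source file before the bank ever sees them.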
Generate the XML and do a final operational review
Once validation passes, generate the XML and review the output before download. I always check the file name, payment count, control totals, and batch identity at this stage. Those details are easy to skip and expensive to reconstruct later.
File naming matters more than teams expect. payments-final-v3.xml is useless in an audit trail. A name that includes date, entity, account, and sequence number is easier to find, approve, and reconcile.
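One way to make that convention stick is to generate the name from batch metadata instead of typing it. The pattern below is a suggestion, not a ConversorSEPA requirement:

```python
# Build an audit-friendly file name from batch metadata. The exact
# pattern is a house-style choice; the point is that it is generated,
# not typed.
from datetime import date

def batch_filename(entity: str, account_suffix: str, seq: int,
                   run_date: date, scheme: str = "SCT") -> str:
    return f"{run_date:%Y%m%d}_{entity}_{account_suffix}_{scheme}_{seq:03d}.xml"

print(batch_filename("ukltd", "0130", 1, date(2026, 4, 30)))
# 20260430_ukltd_0130_SCT_001.xml
```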
Before sending the file to the bank, confirm three points:
- The account used for submission matches the account declared in the file
- The batch belongs to the correct approval cycle
- The XML variant matches your bank’s accepted SEPA format
That is the complete first-run path I use with new clients. Start with a spreadsheet if that is what finance has today. If the source is still AEB, convert it directly. Once the manual process is stable, the same mapping and validation logic can be carried into automation without rebuilding the workflow from scratch.
Automating Payments with the ConversorSEPA JSON API
Manual upload works well for occasional batches. It becomes a bottleneck when payment data already lives inside an ERP, billing system, marketplace, treasury workflow, or custom app.

For developers, a JSON API with 99.9% availability makes automated SEPA file generation dependable. It also helps avoid the bank charges that invalid files can trigger, which matters in a context where 94% of direct debits are processed in batch, as described in Microsoft’s documentation on SEPA credit transfer processing and ISO 20022 file generation.
When API automation is the right move
You should automate when at least one of these is true:
- Your source data already exists in software. Re-exporting to Excel adds friction for no benefit.
- You run frequent batches. Repeating the same manual mapping every day or week is avoidable.
- You need auditability. API-driven generation creates a clearer handoff between system output and bank submission.
- You support multiple clients or entities. Shared service teams and software providers benefit from standardised generation logic.
This is also where broader API design matters. If your team is building payment functionality into a platform, the principles used in developing flexible e-commerce APIs apply surprisingly well to SEPA workflows too. Stable payloads, predictable error handling, and clear versioning reduce long-term support overhead.
What the API workflow usually looks like
At a practical level, the automation flow is straightforward.
- Your application gathers payment data.
- It sends a structured JSON payload to the conversion API.
- The service validates the payload and maps it to the relevant SEPA schema.
- It returns the generated XML file or a retrievable result.
- Your system stores the result, forwards it for approval, or pushes it into the next operational step.
The important design choice is deciding where validation should happen first. In most implementations, basic validation should happen in your own system before the payload is sent. That includes required fields, amount presence, and identifier generation. Schema-level and SEPA-specific validation can then happen during conversion.
If you need the technical endpoint model, payload examples, and authentication details, the ConversorSEPA API documentation is the most direct reference point.
A simple JSON example
Below is a simplified example of what a credit transfer request can look like in practice. Field names vary by implementation, but the structure is what matters.
```python
import requests

api_key = "YOUR_API_KEY"

payload = {
    "scheme": "SCT",
    "debtor": {
        "name": "Example UK Ltd",
        "iban": "DE89370400440532013000"
    },
    "execution_date": "2026-04-30",
    "payments": [
        {
            "instr_id": "PAY20260430-001",
            "end_to_end_id": "INV-10001",
            "creditor_name": "Supplier One GmbH",
            "creditor_iban": "FR7630006000011234567890189",
            "amount": "1250.00",
            "remittance": "Invoice 10001"
        },
        {
            "instr_id": "PAY20260430-002",
            "end_to_end_id": "INV-10002",
            "creditor_name": "Supplier Two BV",
            "creditor_iban": "NL91ABNA0417164300",
            "amount": "890.00",
            "remittance": "Invoice 10002"
        }
    ]
}

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}

response = requests.post(
    "https://api.example.com/sepa/convert",
    json=payload,
    headers=headers,
    timeout=30
)

print(response.status_code)
print(response.text)
```
The pattern matters more than the sample syntax. Keep identifiers unique, send clean account data, and separate scheme logic from application logic.
What works in production
The API projects that stay maintainable usually share the same habits:
- Generate IDs centrally. Don’t let users type instruction IDs manually.
- Store both payload and resulting XML. That gives you a reliable audit trail.
- Separate validation errors from bank submission errors. They are not the same operational problem.
- Version your mapping logic. Payment files live longer than many developers expect.
If your ERP is the source of truth, make it the source of payment data too. Don’t export to spreadsheets just because that’s how the team has always done it.
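"Generate IDs centrally" can be as simple as one deterministic function that every workflow calls. The pattern below (date, batch tag, sequence) is a suggested convention, not a scheme requirement:

```python
# Central instruction ID generation: date + batch tag + sequence.
# Users never type these values; the pattern guarantees uniqueness
# within a run.
from datetime import date

def instruction_id(run_date: date, batch: str, seq: int) -> str:
    return f"PAY{run_date:%Y%m%d}-{batch}-{seq:04d}"

ids = [instruction_id(date(2026, 4, 30), "SUPP", n) for n in range(1, 4)]
print(ids)
assert len(ids) == len(set(ids))  # unique within the run
```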
Security and Workflow Best Practices for Finance Teams
File conversion is only one part of a safe payment process. The bigger risk often sits between “XML generated” and “file submitted”.

Finance teams should treat payment files as sensitive operational assets. They contain account identifiers, beneficiary details, amounts, and timing information. That means convenience cannot be the only design principle. Good workflow discipline is part of payment security.
Keep data exposure short and deliberate
A sound cloud workflow should minimise how long payment data remains accessible. Short retention windows reduce risk if files are left on shared machines, emailed around internally, or uploaded from unsecured devices.
That’s why automatic deletion matters. A service architecture that deletes uploaded data after a short period, such as ten minutes, reduces the chance that yesterday’s payroll or supplier file sits around indefinitely in a system no one is actively monitoring.
If your team is revisiting wider internal controls, this overview of cybersecurity and accounting is a useful companion read. The same principles apply here: least access, clear accountability, and controlled handoffs.
Build a workflow that assumes human mistakes will happen
Most payment incidents don’t come from complex attacks. They come from routine operational mistakes.
Adopt these habits:
- Use a naming convention that means something. Include date, entity, account, payment type, and sequence.
- Separate preparation from approval. The person who creates the file shouldn’t be the only person who approves submission.
- Keep a master mandate record for direct debits. Don’t rely on inbox searches when a collection query appears.
- Store the final submitted file in one controlled location. Avoid local desktop copies as the de facto archive.
Teams often skip these controls because the monthly run feels familiar. Familiarity is exactly what causes shortcuts. Once a process “always works”, people stop checking the parts that eventually fail.
Standardise exceptions, don’t improvise them
Every finance department has edge cases. Refunds. Urgent supplier corrections. One-off foreign beneficiaries. Legacy account records that haven’t been normalised yet.
The wrong response is to invent a new manual workaround every time. The better response is to define exception handling in advance.
| Workflow area | Weak practice | Strong practice |
|---|---|---|
| File storage | Saved on personal desktops | Stored in a controlled shared location |
| Approvals | One person prepares and submits | Two-person check before bank upload |
| Corrections | Users edit XML manually | Correct the source file and regenerate |
| Mandates | Scattered across email and folders | Central record with clear ownership |
Security in payment operations usually looks boring. That’s a good sign.
When the workflow is tidy, audits become easier, staff handovers become less risky, and payment runs stop depending on one person remembering how things were done last quarter.
How to Troubleshoot Common SEPA File Rejection Errors
Bank rejection messages are often technically correct and operationally unhelpful. “Invalid instruction identifier” may be true, but it doesn’t tell your team whether the issue began in the spreadsheet, the export logic, or the bank’s instant-payment rule set.
The fastest way to fix rejections is to treat them as pattern problems. Look at the message, trace it back to the source field, correct the upstream data, and regenerate the file. Don’t patch the XML manually unless you enjoy creating new problems.
SEPA Instant bulk files are particularly sensitive to identifier and batch composition mistakes. A missing unique <InstrId> can cause an 18% rejection rate, while mixing INST and standard payments in one file leads to a 22% error rate, according to Bank of Ireland’s guidance on SEPA Instant bulk payments and file validation.
Common SEPA rejection errors and fixes
| Bank Rejection Message (Symptom) | Likely Cause | Solution in ConversorSEPA |
|---|---|---|
| Invalid IBAN | The source file contains spaces, wrong country structure, or the wrong column was mapped | Correct the IBAN in the source data, remap if needed, then run validation again |
| Duplicate Instruction ID | The file reuses the same <InstrId> across multiple rows or runs | Generate unique instruction IDs and re-export the batch |
| Duplicate End-to-End ID | A reference value was copied down across many payments | Replace reused values with unique references before conversion |
| Unsupported Character Set | The remittance or name field contains characters the bank won’t accept | Clean the source text and keep references simple |
| Failed Schema Validation | A mandatory field is missing or mapped incorrectly | Review field mapping and required values, then regenerate |
| Mixed Instant and standard batch | Instant markers were combined with non-instant transactions | Split the file by scheme and create separate outputs |
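For the character set and remittance length issues in the table above, cleaning can be scripted. The allowed set below follows the commonly used EPC basic Latin character set and the 140-character unstructured remittance limit; confirm the exact rules your bank applies before relying on it.

```python
# Clean remittance text toward the EPC basic Latin character set and
# the 140-character unstructured remittance limit. The allowed set is
# an assumption -- confirm it against your bank's rules.
import re

ALLOWED = re.compile(r"[^A-Za-z0-9/\-?:().,'+ ]")

def clean_remittance(text: str, limit: int = 140) -> str:
    cleaned = ALLOWED.sub(" ", text)                 # replace unsupported chars
    cleaned = re.sub(r"\s+", " ", cleaned).strip()   # collapse whitespace
    return cleaned[:limit]

print(clean_remittance("Invoice #10001 & VAT"))  # Invoice 10001 VAT
```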
Read the rejection by category
Some errors are data errors. Invalid IBANs, blank names, malformed dates. These belong in the source file.
Others are batch design errors. Mixed payment types, reused execution settings, inconsistent identifiers. These often come from how the file was grouped.
A third category is format or schema errors. These usually mean the file was generated with missing required values or the wrong output structure for the bank.
Don’t start with the XML. Start with the row that produced the XML.
That single habit cuts troubleshooting time because it keeps your attention on the source system and the mapping layer, which is where most fixes belong.
Fix the cause, not just the symptom
When a bank says “duplicate identifier”, many teams change the visible value in one record and try again. That might get today’s file through. It doesn’t solve the process that generated duplicate identifiers in the first place.
A better response looks like this:
- Find every affected row, not just the first rejected item.
- Identify the generation rule that produced the duplicate.
- Update the source logic or template so the issue doesn’t recur.
- Revalidate the whole file, not only the corrected rows.
If you want a deeper reference for pre-submission checks, this guide on how to validate a SEPA file before bank upload is worth keeping handy.
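"Find every affected row, not just the first rejected item" is one pass with a counter. A minimal sketch, assuming rows with an illustrative "end_to_end_id" field:

```python
# Surface every duplicate reference in one pass, not just the first
# one the bank rejected.
from collections import Counter

rows = [
    {"row": 1, "end_to_end_id": "INV-10001"},
    {"row": 2, "end_to_end_id": "INV-10002"},
    {"row": 3, "end_to_end_id": "INV-10001"},  # the duplicate the bank saw
]

counts = Counter(r["end_to_end_id"] for r in rows)
duplicates = {ref: n for ref, n in counts.items() if n > 1}
affected = [r["row"] for r in rows if r["end_to_end_id"] in duplicates]

print(duplicates)  # {'INV-10001': 2}
print(affected)    # [1, 3]
```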
The rejection messages worth taking seriously
Some warnings can wait. These can’t:
- Identifier duplication because it affects traceability and acceptance
- IBAN or account structure errors because the bank can’t route the payment correctly
- Instant batch composition errors because the bank may reject the entire batch rather than a single line
- Schema failures because they usually indicate a structural issue, not a one-off typo
When teams build a repeatable troubleshooting process, rejections stop feeling random. They become maintenance work. That’s a much easier problem to manage.
Future-Proofing Your Payments Beyond Basic Conversion
A converter solves today’s file problem. A durable payment workflow prepares you for the next scheme change, the next banking requirement, and the next integration request from your own systems.
That’s why future-proofing starts with choosing tools that do more than transform one format into another. You want support for direct debit mandates, structured validation, adaptable mappings, and a path from manual handling to automation without rebuilding the whole process.
Support the workflows around the file
For direct debits, mandate handling matters just as much as XML generation. If your team can generate PDF mandates, keep a clean mandate register, and tie each collection run back to a valid authorisation record, you reduce both compliance friction and operational confusion.
Legacy data support matters too. Businesses rarely replace every source system at once. A practical setup keeps older exports usable while the rest of the finance stack modernises.
Prepare for instant and hybrid payment operations
SEPA Instant is the clearest example of why basic conversion is no longer enough. According to Bottomline’s paper on real-time and cross-network payment connectivity, full EU adoption of SEPA Instant Credit Transfer bulk files is expected by late 2025, UK businesses face integration challenges, and only 15% of current SME software supports the required hybrid connectivity. That gap points to the need for more future-ready tools in this area.
That matters because UK businesses often need to operate across SEPA and domestic schemes at the same time. Software that can evolve with those operational demands is much more useful than software that only outputs one fixed file type from one fixed input.
The practical goal isn’t to chase every new payment trend. It’s to avoid being trapped by brittle workflows. When your file generation, validation, approvals, mandates, and API automation fit together cleanly, payment operations become easier to adapt.
If your team is still juggling spreadsheets, bank portals, and legacy exports by hand, ConversorSEPA is a straightforward way to convert Excel, CSV, JSON, and older AEB files into valid SEPA XML, and it also gives developers an API path when manual uploads no longer scale.
Frequently Asked Questions
- What source file formats can SEPA bulk payment software convert?
- Most SEPA bulk payment tools accept Excel (.xlsx), CSV, and JSON as input formats. Some also support legacy banking formats such as AEB 34, 14, and 59, converting them directly into valid SEPA XML without manual retyping. The key is ensuring your source data is clean and consistently structured before conversion.
- How do I avoid SEPA file rejections when submitting bulk payments?
- The most common rejections come from invalid IBANs, duplicate instruction IDs, overlong remittance text, and missing mandatory fields. Running validation before generating the XML catches these issues early. Always fix the source data rather than editing the XML manually, and use a naming convention that supports auditability.
- When should I switch from manual upload to API-based SEPA file generation?
- API automation makes sense when your payment data already lives in software like an ERP or billing platform, when you run frequent batches, or when you need a clear audit trail between system output and bank submission. Manual upload works for occasional batches, but becomes a bottleneck at scale.
- What security practices should finance teams follow for SEPA payment files?
- Treat payment files as sensitive operational assets. Use automatic deletion of uploaded data after processing, separate file preparation from approval, store final submitted files in a controlled location, and adopt a consistent naming convention. The person who creates the file should not be the only person who approves submission.